OpenAI releases a new safety blueprint to address the rise in child sexual exploitation
OpenAI has released a new safety blueprint to combat the misuse of AI in child sexual exploitation.
Read on TechCrunch →

Anthropic, a leading AI company, has appointed Amlan Mohanty to head its policy and external affairs in India, a key market for its Claude AI.
Why it matters
This appointment signals Anthropic's strategic commitment to navigating India's complex and evolving AI policy landscape. By bringing in a seasoned policy expert with a background in responsible AI, Anthropic aims to engage proactively with Indian regulators, build local partnerships, and ensure its AI products are developed and deployed ethically and in line with national priorities. The move underscores the growing need for global AI companies to localize their policy engagement and adapt to regional regulatory frameworks.
AI company Anthropic has hired Amlan Mohanty to manage its relationships with the Indian government and other partners. This is important because India is a huge market for Anthropic's AI, Claude, and Mohanty will help ensure the company operates responsibly and effectively within India's rules and regulations.
AI Forensics, a campaign group, identified nearly 25,000 active users in Spanish and Italian Telegram groups trading nude images, often for cash, during a six-week study.
Read on Economic Times Tech →

Super Micro is investigating alleged export-control violations involving a scheme to reroute billions of dollars' worth of US AI servers to China, a probe that has led to personnel changes and a review of the company's trade compliance.
Read on Economic Times Tech →