EU says OpenAI offers to open access to cybersecurity model, Anthropic not there yet
The EU is pleased with OpenAI's offer to share its cybersecurity AI model, while Anthropic has not made a similar commitment.
AI deepfakes are increasingly used in US political campaigns, misleading voters with fabricated content and raising concerns about election integrity.
Why it matters
The proliferation of AI deepfakes in political campaigns poses a significant threat to democratic processes. By creating highly realistic fabricated content, these tools can manipulate public opinion, erode trust in electoral outcomes, and make it difficult for voters to discern truth from falsehood. The lack of robust regulatory frameworks and detection mechanisms amplifies these concerns, highlighting the urgent need for solutions to safeguard the integrity of elections in the digital age.
In short: AI tools can now produce convincing fake videos of politicians saying and doing things they never did. These deepfakes are being deployed in election campaigns to mislead voters, and their realism makes them hard to detect, which undermines democracy.
OpenAI is being sued by a family claiming ChatGPT assisted a shooter in planning a mass shooting, with the lawsuit alleging the chatbot failed to flag dangerous conversations.
India's IT sector is experiencing a significant boom in AI job openings, with projected growth of 15-20%, supported by investments in data centers and digital infrastructure.