The ongoing drone wars in Ukraine and the AI-targeting controversies in Gaza are providing real-world data on how AI systems perform in combat.

According to media reports, the Israeli military (IDF) has employed an AI targeting program, known as Lavender, in its conflict with Hamas in Gaza. Lavender was reportedly developed by the IDF’s Unit 8200 data science and AI center. The company most frequently associated in these reports with providing the underlying technology and support for Israel’s AI targeting capabilities is Palantir Technologies, whose software has also been linked to facial recognition and biometrics technology through its government contracts.

https://www.business-humanrights.org/es/%C3%BAltimas-noticias/palantir-allegedly-enables-israels-ai-targeting-amid-israels-war-in-gaza-raising-concerns-over-war-crimes/

AI safety laws vary by jurisdiction; the most prominent is the EU’s comprehensive Artificial Intelligence Act. Common themes across jurisdictions include:

• Prohibited practices: Several jurisdictions are banning specific uses of AI that are deemed harmful, such as deceptive AI, exploitation of vulnerabilities, and certain types of biometric identification.

• Risk-based approach: A common strategy is to classify AI systems based on their potential for harm, with more stringent rules for high-risk applications like those affecting health, safety, and fundamental rights.

• Transparency and accountability: Regulations often require documentation and record-keeping for high-risk systems to ensure compliance and allow for accountability.

• Human oversight and privacy: Many laws emphasize the need for human oversight in AI systems and mandate that AI applications comply with data protection and privacy standards.
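The risk-based approach above can be pictured as a simple tiering scheme. The sketch below is a toy illustration only: the tier names follow the EU AI Act’s broad categories, but the example use cases and their assigned tiers are hypothetical and do not represent a legal classification.

```python
# Toy illustration of a risk-based AI classification scheme.
# Tier names follow the EU AI Act's broad categories; the use-case
# mapping below is illustrative only, not a legal determination.

RISK_TIERS = ["unacceptable", "high", "limited", "minimal"]

# Hypothetical examples of how use cases might map to tiers.
USE_CASE_TIER = {
    "social scoring by public authorities": "unacceptable",  # prohibited practice
    "AI-assisted medical diagnosis": "high",                 # affects health/safety
    "customer service chatbot": "limited",                   # transparency duties
    "spam filtering": "minimal",                             # largely unregulated
}

def obligations(tier: str) -> list[str]:
    """Return simplified obligations attached to each risk tier."""
    return {
        "unacceptable": ["prohibited"],
        "high": ["risk assessment", "human oversight", "documentation"],
        "limited": ["transparency (disclose AI use)"],
        "minimal": [],
    }[tier]

for use_case, tier in USE_CASE_TIER.items():
    print(f"{use_case}: {tier} -> {obligations(tier)}")
```

The key design idea is that obligations scale with risk: the stricter requirements (risk assessments, human oversight, documentation) attach only to the high-risk tier, while low-risk systems face few or no rules.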

European Union (EU): AI Act

In brief: What this regulation mandates

Security: The Act mandates that high-risk AI systems meet standards for robustness, accuracy, and cybersecurity. Providers must conduct risk assessments and implement human oversight to ensure system integrity.

Privacy: The AI Act complements the GDPR by enforcing transparency obligations, such as informing users when they are interacting with AI systems like chatbots or encountering AI-generated content.

USA: Executive Order 14179

In brief: What this regulation mandates

Security: The order prioritizes national security by directing agencies to enhance U.S. dominance in AI technologies. While it doesn’t establish new cybersecurity requirements, it mandates that the federal government identify and remove existing policies that could obstruct the secure development of AI systems critical to national interests.
