AI & Law

Your go-to source for global AI Governance news. #AICompliance #AIEthics Russian version: https://t.me/ai_and_law_rus Contact: @mmariuka

AI Copyright Battle Heats Up: NYT Denies OpenAI Access to Reporters' Notes

The ongoing copyright dispute between The New York Times and OpenAI has escalated, with OpenAI demanding access to journalists' notes and memos during the discovery phase of the lawsuit. The Times claims OpenAI's ChatGPT infringed its copyrights by using "near-verbatim excerpts" from its articles without permission.

OpenAI argues that access to reporters' materials, including interview notes and records, is essential for its defense, particularly regarding fair use claims. It contends that access to the published works alone is not enough to determine whether ChatGPT copied "original works of authorship." The Times vehemently opposes the request, calling it "unprecedented" and unnecessary for copyright infringement claims: the focus, it argues, should be on the published works themselves, not its internal newsgathering process.

This case highlights the evolving landscape of fair use in AI training. OpenAI's data-scraping approach, once commonplace, now faces increased scrutiny from rightsholders seeking to monetize their content. The Times' lawsuit and Reddit's recent data-scraping restrictions demonstrate a shift toward stricter data-access rules within the AI development community.

#AI #Copyright #NYT #OpenAI #ChatGPT
Compel – #152 in The New York Times Company v. Microsoft Corporation (S.D.N.Y., 1:23-cv-11195) – CourtListener.com

LETTER MOTION to Compel The New York Times to Produce Documents addressed to Judge Sidney H. Stein from Elana Nightingale Dawson dated July 1, 2024. Document filed by OAI Corporation, LLC, OpenAI GP, LLC, OpenAI Global LLC, OpenAI Holdings, LLC, OpenAI LLC, OpenAI LP, OpenAI OpCo LLC, OpenAI, Inc.. (Attachments: # 1 Exhibit 1 - Plaintiff's Responses Excerpts).(Nightingale Dawson, Elana) (Entered: 07/01/2024)

Key Institutions Enforcing the EU AI Act

Freshfields Bruckhaus Deringer has outlined the main institutions responsible for enforcing the EU AI Act. At the EU level, the European Commission, acting through the AI Office, has exclusive powers over general-purpose AI (GPAI): overseeing compliance, coordinating cross-border investigations, and overruling national decisions when necessary. National market surveillance authorities manage the remaining aspects, with extensive powers of surveillance, investigation, and enforcement.

The European Artificial Intelligence Board advises on consistent application, supported by a scientific panel that issues alerts on systemic risks. The European Data Protection Supervisor also plays a role in enforcement.

#AIAct #AICompliance
EU AI Act unpacked #9: Who are the regulators to enforce the AI Act?

While we have already explored different obligations that apply under the EU AI Act (AI Act) in previous posts of our blog series, we will now take a de...

World Religions Commit to AI Ethics in Hiroshima

Religious leaders from around the world have gathered in Hiroshima to sign the "Rome Call for AI Ethics," emphasizing ethical AI development for peace. The event, titled "AI Ethics for Peace: World Religions Commit to the Rome Call," was co-organized by the Pontifical Academy for Life, Religions for Peace Japan, the Abu Dhabi Forum for Peace, and the Chief Rabbinate of Israel's Commission for Interfaith Relations.

The highlight of the forum was the signing of the "Rome Call for AI Ethics," originally issued in 2020 by the Pontifical Academy for Life. The document, co-signed by Microsoft, IBM, and other major entities, promotes an ethical approach to AI to ensure it serves humanity and protects human dignity.

#AI #AIEthics
Japan's Defense Ministry Introduces Its First AI Policy

Japan's Defense Ministry has released its inaugural policy on the use of AI in military applications, aiming to address recruitment challenges and stay competitive in defense technology. The policy outlines seven priority areas for AI deployment, including target detection, intelligence analysis, and unmanned systems.

The strategy emphasizes human control over AI systems and rules out fully autonomous lethal weapons. Japan's policy could set a precedent for responsible AI use in military applications, influencing global approaches to the AI arms race.

#AI #AIPolicy #EthicalAI
Japan’s Defense Ministry unveils first basic policy on use of AI

The new policy comes as Japan looks to stave off a manpower shortage and keep pace with China and the U.S. on the technology’s military applications.

China Promotes Global AI Cooperation with New Shanghai Declaration

China is advancing global AI cooperation through the Shanghai Declaration on Global AI Governance, a five-point pledge aimed at fostering open AI development and international collaboration. Announced at the World Artificial Intelligence Conference (WAIC) in Shanghai, the declaration emphasizes AI research and development across sectors including healthcare, transportation, and agriculture.

The declaration includes commitments to AI safety, preventing the use of AI for public opinion manipulation and disinformation, and promoting the transfer of AI technologies under the principles of openness and shared benefit. It also addresses the mitigation of AI's impact on employment.

Chen Jining, party secretary for Shanghai, called for a collaborative effort from governments, the scientific community, and industry to ensure AI benefits humanity. Chinese Premier Li Qiang reiterated the importance of AI security and governance, expressing China's willingness to work with other nations to enhance global AI development.

The declaration builds on China's existing AI governance initiatives, such as supporting the U.S.-led UN resolution on trustworthy AI and the Bletchley Declaration on AI Safety.

#AI #ShanghaiDeclaration #AIGovernance
European Commission's AI Codes of Practice: A Self-Regulation Concern?

According to Euractiv, the European Commission plans to let AI model providers draft the codes of practice for compliance with the AI Act, with civil society organizations consulted during the process. This approach has sparked concerns about industry self-regulation, as the codes will serve as compliance measures for general-purpose AI models until harmonized standards are set. The Commission may grant the codes EU-wide validity through an implementing act, and some civil society members worry this could let Big Tech essentially write its own rules.

The AI Act's language on stakeholder participation in drafting the codes is ambiguous. The Commission has stated that an upcoming call for expressions of interest will clarify how various stakeholders, including civil society, will be involved, but specifics are still lacking. An external firm will be hired to manage the drafting process, including stakeholder engagement and weekly working-group meetings; the AI Office will oversee the process but will focus primarily on approving the final codes.

#AIRegulation #EUCommission #AICodes #AIAct #Compliance
Inside the EU Commission’s rush to build codes of practice for general purpose AI

Providers of general purpose AI, like ChatGPT, will be in the driver's seat when drafting codes of practice that they can later use to demonstrate compliance with the AI Act.

German Court Allows Patents for AI-Generated Inventions

Germany's highest civil court, the Bundesgerichtshof, has ruled that inventions generated by artificial intelligence can be patented. The decision (English translation here) resolves a previous split between German federal appellate courts. The case is part of the Artificial Inventor Project, a global initiative led by Professor Ryan Abbott of the University of Surrey that aims to patent AI-generated output.

The ruling contrasts with decisions in other jurisdictions, such as the United States, where a natural person must contribute substantially to an invention for it to be patentable. Earlier this year, the UK Supreme Court ruled that AI-generated inventions are inherently unprotectable. In Germany, prior court decisions had both overturned and upheld the German patent office's rejection of patent applications for AI-generated inventions.

The patent at the center of this case covers a food container designed using fractal geometry, created by an AI named DABUS ("device for the autonomous bootstrapping of unified sentience").

#AIPatents #AI
Civil Society Calls for Independent AI Regulators

A coalition of over 30 civil society organizations, including BEUC, has urged the European Commission to ensure the independence of the national authorities enforcing the AI Act. In an open letter, the organizations highlighted the appointments in Denmark and Italy as examples of questionable practices. They stressed the need for clear guidelines to member states to maintain the integrity and impartiality of AI regulation.

#AIRegulation #AIAct
UK: Think Tank Calls for AI Incident Reporting System

The Centre for Long-Term Resilience (CLTR) has highlighted a critical gap in UK AI regulation, calling for a comprehensive incident reporting system. According to CLTR, over 10,000 safety incidents involving AI systems have been recorded since 2014, and as AI becomes more integrated into society, the frequency and impact of these incidents are likely to increase.

CLTR argues that a robust incident reporting system is essential for effective AI regulation, akin to those in safety-critical industries like aviation and medicine. This view is supported by experts and governments, including the US, China, and the EU. The proposed system aims to:

✅ Monitor how AI is causing safety risks in real-world contexts, providing a feedback loop that can allow course correction in how AI is regulated and deployed;
✅ Coordinate responses to major incidents where speed is critical, followed by investigations into root causes to generate cross-sectoral learnings;
✅ Identify early warnings of larger-scale harms that could arise in future, for use by the AI Safety Institute and Central AI Risk Function in risk assessments.

#AI
AI incident reporting: Addressing a gap in the UK’s regulation of AI

by Tommy Shaffer Shane. Read the full policy paper here. Executive summary: AI has a history of failing in unanticipated ways, with over 10,000 safety incidents recorded by news outlets in deployed AI systems since 2014. With greater integration of AI into society, incidents are likely to increase in number and scale of impact. In other safety-critical industries, such as aviation and medicine, incidents like these are collected and investigated by authorities in a process known as 'incident reporting…

U.S. District Court in North Carolina Bans Use of AI in Legal Research

The U.S. District Court for the Western District of North Carolina has issued a standing order prohibiting attorneys from using generative artificial intelligence for legal research. Lawyers must now file a certification with every brief, confirming that no AI was used in the research process and that all statements and citations were personally verified by the attorney or a paralegal. The measure aims to ensure the accuracy and integrity of legal documents submitted to the court.

#LegalTech #AI #ArtificialIntelligence #LegalResearch