AI & Law

Your go-to source for global AI Governance news. #AI Compliance #AIEthics Russian version https://t.me/ai_and_law_rus Contact @mmariuka

EU Council Adopts Regulation to Boost AI Development with Supercomputing Power

The Council of the EU has officially adopted an amendment to the regulation on the European High-Performance Computing (EuroHPC) joint undertaking, paving the way for the creation of AI factories. Under the amended regulation, the EuroHPC initiative will promote AI factories that include AI supercomputers, associated data centers, and specialized supercomputing services. These facilities will be accessible to both public and private users, with specific conditions tailored for startups and SMEs. Host entities of AI factories will receive EU financial support covering up to 50% of both the acquisition and operating costs of AI supercomputers. The regulation will be published in the Official Journal of the European Union and will enter into force 20 days later, marking a significant step toward enhancing AI development and innovation across Europe. #AI #Supercomputing
Council adopts regulation on use of supercomputing in AI development

Council adopts amendment to the EuroHPC regulation as regards the use of supercomputing in artificial intelligence development.

Corporate Leaders Skeptical About AI Policy Effectiveness, BRG Report Finds

According to Berkeley Research Group's Global AI Regulation Report, only 36% of corporate leaders believe current and future AI policies will provide the necessary guardrails. This report, drawing on responses from over 200 corporate leaders and executive-level lawyers worldwide, evaluates the current AI regulatory landscape and identifies key challenges and priorities for effective AI governance. The report highlights a significant gap in confidence regarding compliance readiness, with many organizations struggling to implement internal safeguards for responsible AI use. Notably, the retail and consumer goods sectors are particularly lagging in this aspect. Future AI policy priorities include data integrity, security, and accuracy, though opinions vary by region and industry. Executives and respondents from the technology and financial services sectors prioritize adaptability and transparency, while lawyers and those in retail favor enforceability and strictness. The report underscores the growing divergence between the US and EU on AI regulation, complicating the creation of broad, comprehensive guidelines. #AI #AIRegulation #Compliance #AIEthics
Major Record Labels Sue AI Start-Ups for Copyright Violation

The world’s biggest music labels, including Sony Music, Universal Music Group, and Warner Records, are suing AI start-ups Suno and Udio for alleged copyright infringement on an unprecedented scale. They claim that the software of these companies illegally copies music to generate similar works and are seeking $150,000 per violation. This lawsuit, announced by the Recording Industry Association of America, marks a significant challenge against AI firms' use of copyrighted material. The record labels argue that AI-generated songs like "Prancing Queen" are nearly indistinguishable from original tracks by bands like ABBA, threatening genuine human artistry and the entire music ecosystem. #AI #Copyright #IntellectualProperty
IAPP Releases Comprehensive Report on AI Governance in Practice

The International Association of Privacy Professionals (IAPP) has released a new report on AI Governance in Practice, providing key insights into the evolving field of AI governance. The report offers a foundational overview of AI, detailing its development and essential terminology. This approach allows anyone to grasp the basics and advance in the field. It includes a thorough inventory of AI risks, particularly data-centric ones, and offers practical strategies for managing them. Additionally, the report cites leading resources, including laws, regulations, and frameworks like the NIST AI RMF, providing a strong basis for deeper exploration. Moreover, the report highlights various industry examples to contextualize theoretical concepts. For those preparing for the AI governance exam, the report aligns well with the AIGP Body of Knowledge and covers numerous topics likely to be tested. #AI #AIGovernance #DataPrivacy #IAPP #AIGP
IAPP AI Governance Report 2024.pdf (37.69 MB)
Appian CEO Challenges AI Industry to Prioritize Trust

Matt Calkins, CEO of Appian, has called on the AI industry to prioritize responsible development and trust. At a critical moment for AI, Calkins unveiled guidelines promoting data transparency, user consent, and respect for intellectual property. "We must ensure AI flourishes by building trust," Calkins stated. His four principles include disclosing data sources, using private data with consent, anonymizing personally identifiable data, and compensating for copyrighted information. These steps aim to shift AI development from a data race to a trust race. As AI faces increasing scrutiny, Calkins positions Appian as a leader in responsible AI, encouraging others to join this movement. Trust, he argues, will unlock AI's full potential and redefine industry success. #AI #ResponsibleAI #DataPrivacy #Appian #TechLeadership
For AI to really succeed, we need to protect private data - Fast Company

An American AI model based on clear property rights and data privacy will inspire more participation than a Chinese AI model with its data controlled by the CCP.

Clearview AI Agrees to Conditional Settlement in Privacy Lawsuit

Clearview AI has reached a unique settlement agreement in a privacy lawsuit involving its data-scraping facial recognition technology. Unable to afford immediate compensation, Clearview AI will establish a fund representing 23% of the company's value as of last September. This fund will only be activated if the company undergoes an IPO or a significant event like a merger or asset sale. Based on Clearview's current valuation, this fund could be worth up to $51.7 million. The settlement, awaiting final court approval, also includes appointing a special master to demand cash from Clearview or sell settlement rights to third parties, with proceeds going to class members. Clearview AI has faced multiple lawsuits accusing it of privacy violations, leading to this creative resolution. The company, burdened by mounting legal costs, agreed to this settlement to avoid bankruptcy and provide potential relief to affected individuals. #Privacy #AI #ClearviewAI #FacialRecognition #LegalTech
Indiana Officer Resigns After Misusing Clearview AI

An Indiana police officer has resigned after it was discovered he frequently misused Clearview AI’s facial recognition technology to track social media users not linked to any crimes. According to the Evansville Police Department, the officer disguised personal searches by using actual case numbers associated with real incidents. An audit revealed unusually high usage of Clearview AI by the officer, who primarily searched social media images rather than the live or CCTV footage typically used in investigations. The department recommended termination, but the officer resigned before a final determination could be made. This incident highlights significant concerns about the misuse of facial recognition technology and underscores the need for stricter oversight and compliance measures to prevent abuse. #AI #Privacy #FacialRecognition #ClearviewAI
Singapore Unveils Comprehensive Framework for Governing Generative AI

On May 30, 2024, Singapore released the Model AI Governance Framework for Generative AI, a collaborative effort by the Infocomm Media Development Authority (IMDA) and the AI Verify Foundation. This framework addresses the unique challenges posed by generative AI and outlines nine key dimensions essential for its governance. The framework emphasizes the principles of accountability, transparency, fairness, robustness, and security. It calls for collaboration between policymakers, industry stakeholders, researchers, and like-minded jurisdictions to effectively manage AI governance. Among its key proposals are measures for data integrity, trusted development, security, incident reporting, and third-party testing. The framework aims to provide a foundation for global dialogue and effective policy approaches to ensure AI's safe and ethical use while fostering innovation. #AIGovernance #GenerativeAI #Singapore
Research Group Demands Global Shutdown of Foundation Model Development

The Machine Intelligence Research Institute (MIRI) calls for a global halt on the development of foundation models, fearing they could "destroy humanity" without proper safeguards. Foundation models, capable of a broad range of applications, may evolve to be smarter than humans. MIRI urges a complete shutdown of attempts to build any system smarter than a human. This extends beyond the previous calls by tech leaders like Elon Musk and Steve Wozniak, who sought a pause on models more powerful than OpenAI’s GPT-4. MIRI stresses the need for urgent and sweeping legislation, including an "off switch" for AI systems to prevent malevolent behaviors. The group emphasizes the importance of addressing AI existential risks seriously and ensuring safe AI development in the future. #AI #ArtificialIntelligence #AIEthics #FoundationModels #MIRI
Blog - Machine Intelligence Research Institute

US: 5th Circuit Drops Plans for AI Regulation Rule After Lawyer Opposition

The 5th U.S. Circuit Court of Appeals in New Orleans has decided against adopting a rule to regulate the use of generative AI by lawyers. The proposed rule, which would have been a first at the appellate level, aimed to ensure that AI-generated filings were accurate and verified. Public comments from lawyers, which were largely negative, influenced the court's decision. The proposed rule required lawyers using AI tools, like OpenAI's ChatGPT, to certify the accuracy of citations and legal analysis. Non-compliance could lead to sanctions and the striking of filings. Despite the rejection, the court emphasized that current rules already mandate truthfulness and accuracy in filings, and AI use will not be an excuse for any violations. #AI #LegalTech #GenerativeAI #LegalEthics