
Artificial Intelligence | AI News

🤖 The #1 AI news source on Telegram! Delve into the latest breakthroughs and emerging trends. Buy Ads: @Liang_Zhuge or https://telega.io/c/deep_machine_learning_future

Subscribers: 106,213 (−512 over 24 hours; +3,012 over 7 days; +16,668 over 30 days)

[Video, 01:24]
France-based Pollen Robotics has released a new video showing two teleoperated robots teaming up on manipulation tasks. The company aims to develop robots collaboratively, on open-source hardware and software platforms. Source | Artificial intelligence 🤖
GPT-5 tomorrow?
[Photo]
This is really funny — a former QuickBooks employee revealed a company secret. A few years ago, the company had an "AI" that allowed clients to automatically process receipts. It turned out that under the hood of this "AI" were dozens of Filipinos manually doing all the work. And when the "neural network" was slow to process requests, it was simply because the poor guys were asleep. Source | Artificial intelligence 🤖
[Video, 01:28]
Is this why they created neural networks? This guy is making gangsta rap using the voices of popular characters. Source | Artificial intelligence 🤖
Employees of OpenAI and Google DeepMind warn that AI could destroy humanity 💀

Former and current employees of OpenAI and Google DeepMind have signed an open letter about the risks of artificial intelligence, and called for protection from potential retaliation by their companies. The risks they name range from deepening inequality to losing control over autonomous AI systems, which could potentially lead to the extinction of humanity. They propose several measures to address these issues:

🟠 Reject agreements that suppress criticism;
⚫️ Offer employees a verifiable anonymous process for reporting AI problems;
🟠 Support a culture of open criticism, so that the public, boards of directors, regulators, and others are informed about the risks of commercial AI models in a timely manner;
⚫️ Avoid retaliating against employees who publicly share risk-related confidential information.

In total, the letter was signed by 13 people: 7 former and 4 current OpenAI employees, plus 1 former and 1 current Google DeepMind employee. The CEO and CTO of OpenAI claim there are no safety issues with AI products. 🤷‍♀️

Source | Artificial intelligence 🤖
[Video, 02:04]
OpenAI released another video showcasing ChatGPT-4o's new voice feature, and it's so wild! It can generate different character voices. Feel the AGI. Source | Artificial intelligence 🤖
[Video, 03:13]
The next wave of AI is Physical AI. At COMPUTEX 2024 in Taiwan, NVIDIA CEO Jensen Huang presented their comprehensive ecosystem for robotics developers and companies. Source | Artificial intelligence 🤖
Today, a lot of strange and interesting things are happening:

— Right now, ChatGPT, Perplexity, Gemini, and Claude are (at least partially) down.

— Leading AI researchers and former/current employees of OpenAI/DeepMind are signing an open letter stating that those working on AGI should have the freedom to express their opinions and criticize the company without risking their financial incentives. They argue that companies give out millions of dollars in shares but then say, "If you disagree with us, you'll lose everything!"

— Recently fired OpenAI employee Leopold Aschenbrenner, who worked closely with Ilya Sutskever on his team, has published a 150+ page paper. It covers everything from scaling laws and predictions of model development to alignment issues and the behavior of leading labs as they approach AGI. Read it here: http://situational-awareness.ai. If you have half an hour and really want to understand why people are saying that models will genuinely become smarter by 2027-2030, start with the first two chapters.

— Along with this, a 4-hour interview with Dwarkesh, whom I've recommended before, has been released. We'll watch it in parts, and I'll try to write about interesting points. Covered topics include:
1) The race to a $1 trillion power cluster
2) What will happen in 2028
3) What happened at OpenAI (though I don't think any new details will be revealed)
4) China's espionage in AGI labs

Source | Artificial intelligence 🤖
Leopold Aschenbrenner - 2027 AGI, China/US Super-Intelligence Race, & The Return of History

Chatted with my friend Leopold Aschenbrenner about the trillion dollar cluster, unhobblings + scaling = 2027 AGI, CCP espionage at AI labs, leaving OpenAI and starting an AGI investment firm, dangers of outsourcing clusters to the Middle East, & The Project. Read the new essay series from Leopold that this episode is based on here: https://situational-awareness.ai/

Timestamps:
00:00:00 The trillion-dollar cluster and unhobbling
00:21:20 AI 2028: The return of history
00:41:15 Espionage & American AI superiority
01:09:09 Geopolitical implications of AI
01:32:12 State-led vs. private-led AI
02:13:12 Becoming Valedictorian of Columbia at 19
02:31:24 What happened at OpenAI
02:46:00 Intelligence explosion
03:26:47 Alignment
03:42:15 On Germany, and understanding foreign perspectives
03:57:53 Dwarkesh's immigration story and path to the podcast
04:03:16 Random questions
04:08:47 Launching an AGI hedge fund
04:20:03 Lessons from WWII
04:29:57 Coda: Frederick the Great

Links:
Transcript: https://www.dwarkeshpatel.com/p/leopold-aschenbrenner
Apple Podcasts: https://podcasts.apple.com/us/podcast/leopold-aschenbrenner-china-us-super-intelligence-race/id1516093381?i=1000657821539
Spotify: https://open.spotify.com/episode/5NQFPblNw8ewxKolIDpiYN?si=6NaTHAugT2SxZrspW3lziw
Follow me on Twitter: https://twitter.com/dwarkesh_sp
Follow Leopold on Twitter: https://x.com/leopoldasch

[Video, 00:40]
⚡️ Nvidia Introduces G-Assist

G-Assist is an AI assistant for gamers that can guide players through computer games and optimize settings.

🤖 The assistant can respond to voice commands, understand in-game situations, and optimize PC settings for better performance.
🌟 Microsoft also has a similar AI assistant that helps Minecraft players in the game.

Source | Artificial intelligence 🤖
[Photo]
Intel has unveiled Lunar Lake, an AI chip for laptops from all major PC manufacturers.

✅ Lunar Lake will feature 16 or 32 GB of LPDDR5X memory built into the package, reducing power consumption.
✅ It will have 8 cores and higher performance than previous models, delivering up to 48 TOPS, exceeding Microsoft's requirements for running "Copilot+ PC."
✅ Intel claims Lunar Lake will run 20 iterations of Stable Diffusion in just 5.8 seconds, locally on your device.
✅ Intel says a large wave of Lunar Lake laptops will appear later this year, with 80 different designs from 20 hardware partners, including all major PC manufacturers, available at launch.

Source | Artificial intelligence 🤖
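For a sense of scale, here is a quick back-of-the-envelope calculation (not from Intel's materials) of the throughput implied by the claimed Stable Diffusion benchmark:

```python
# Intel's claim: 20 Stable Diffusion denoising iterations in 5.8 seconds,
# run locally on a Lunar Lake device.
iterations = 20
seconds = 5.8

throughput = iterations / seconds  # implied iterations per second
time_for_50 = 50 / throughput      # time for a typical 50-step generation

print(f"{throughput:.2f} it/s")        # ≈ 3.45 it/s
print(f"{time_for_50:.1f} s per image")  # ≈ 14.5 s for a 50-step image
```

The 50-step figure is only an illustrative assumption (a common default step count), not part of Intel's claim.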