DL in NLP
News and reviews of papers on natural language processing, neural networks, and the like. Contact: @dropout05 (no ads)
#1X #humanoid #embodiedai You can now tell EVE to do multiple autonomous tasks back-to-back. Watch a team of EVEs work together to clean up our office. In this video, you see the start of 1X's development of an advanced AI system that chains simple tasks into complex actions using voice commands, allowing seamless multi-robot control and remote operation. By starting with single-task models, we ensure smooth transitions to more powerful unified models, ultimately aiming to automate high-level actions using AI. This video does not contain teleoperation, computer graphics, cuts, video speedups, or scripted trajectory playback. It's all controlled via neural networks. Learn more here: www.1x.tech/discover/ai-update-voice-commands-chaining-tasks
About 1X: 1X is an AI robotics company that develops safe, intelligent humanoid robots designed to work alongside humans. Founded in 2014, 1X is headquartered in both the San Francisco Bay Area and Norway.
Connect with 1X
Website: www.1x.tech
X: https://x.com/1x_tech
LinkedIn: https://www.linkedin.com/company/1x-technologies/
Instagram: https://www.instagram.com/1x.technologies/

April 11, 2024. Speakers: Jason Wei & Hyung Won Chung, OpenAI
Intuitions on Language Models (Jason)
Jason will talk about some basic intuitions on language models, inspired by manual examination of data. First, he will discuss how one can view next-word prediction as massive multi-task learning. Then, he will discuss how this framing reconciles scaling laws with emergent individual tasks. Finally, he will talk about the more general implications of these learnings. Slides here:
https://docs.google.com/presentation/d/1JKpqsbkr5Fg-bj1iElPaC-ToTVpRmRLKZmN89krwl04/edit?usp=sharing&resourcekey=0-VPgp_Yc4krPPW3Mxv6UjgQ
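The framing above, next-word prediction as massive multi-task learning, can be made concrete with a small sketch: a single language-modeling loss summed over a mixed corpus decomposes into per-"task" losses. The toy corpus, the bigram "model", and the task labels below are illustrative assumptions, not material from the talk.

```python
# Minimal sketch: one next-token objective over a mixed corpus is a weighted
# sum of per-"task" losses (toy data and toy bigram model, for illustration).
import math
from collections import Counter, defaultdict

corpus = [
    # (implicit task, text) -- every next-token prediction exercises some skill
    ("translation", "bonjour means hello"),
    ("arithmetic",  "two plus two equals four"),
    ("sentiment",   "the movie was wonderful"),
]

# Fit a tiny bigram model on the whole corpus (one model, one objective).
bigram, unigram = Counter(), Counter()
for _, text in corpus:
    toks = text.split()
    unigram.update(toks[:-1])
    bigram.update(zip(toks[:-1], toks[1:]))

def next_token_nll(prev, nxt, alpha=1.0, vocab_size=100):
    """Smoothed -log p(next | prev) under the toy bigram model."""
    p = (bigram[(prev, nxt)] + alpha) / (unigram[prev] + alpha * vocab_size)
    return -math.log(p)

# The single LM loss is a sum over tokens; grouping the terms by the task
# each token "belongs to" recovers per-task losses.
per_task, total = defaultdict(float), 0.0
for task, text in corpus:
    toks = text.split()
    for prev, nxt in zip(toks[:-1], toks[1:]):
        nll = next_token_nll(prev, nxt)
        per_task[task] += nll
        total += nll

print({t: round(v, 2) for t, v in per_task.items()})
print("total LM loss:", round(total, 2), "= sum of per-task losses")
assert abs(total - sum(per_task.values())) < 1e-9
```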
Shaping the Future of AI from the History of Transformer (Hyung Won)
Hyung Won: AI is developing at such an overwhelming pace that it is hard to keep up. Instead of spending all our energy catching up with the latest developments, I argue that we should study the change itself. The first step is to identify and understand the driving force behind the change. For AI, it is exponentially cheaper compute and the associated scaling. I will provide a highly opinionated view on the early history of Transformer architectures, focusing on what motivated each development and how each became less relevant with more compute. This analysis will help us connect the past and present in a unified perspective, which in turn makes it more manageable to project where the field is heading. Slides here:
https://docs.google.com/presentation/d/1u05yQQaw4QXLVYGLI6o3YoFHv6eC3YN8GvWD8JMumpE/edit?usp=sharing
About the speakers: Jason Wei is an AI researcher based in San Francisco. He is currently working at OpenAI. He was previously a research scientist at Google Brain, where he popularized key ideas in large language models such as chain-of-thought prompting, instruction tuning, and emergent phenomena. Hyung Won Chung is a research scientist on the OpenAI ChatGPT team. He has worked on various aspects of large language models: pre-training, instruction fine-tuning, reinforcement learning from human feedback, reasoning, multilinguality, parallelism strategies, etc. Some of his notable work includes the Flan scaling papers (Flan-T5, Flan-PaLM) and T5X, the training framework used to train the PaLM language model. Before OpenAI, he was at Google Brain, and before that he received a PhD from MIT. More about the course can be found here:
https://web.stanford.edu/class/cs25/
View the entire CS25 Transformers United playlist:
https://www.youtube.com/playlist?list=PLoROMvodv4rNiJRchCzutFw5ItR_Z27CM

We're announcing GPT-4o (Omni), our new flagship model, which can reason across audio, vision, and text in real time.
Chain-of-thought responses from language models improve performance across most benchmarks. However, it remains unclear to what extent these performance gains can be attributed to human-like task...
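For readers unfamiliar with the technique mentioned above, here is a minimal sketch of what a chain-of-thought prompt looks like. The `query_llm` callable is a hypothetical stand-in for any chat-completion call, and the few-shot exemplar and question are toy examples, not from the post or the paper it quotes.

```python
# Minimal sketch of chain-of-thought prompting (illustrative only).
# `query_llm` is a hypothetical helper standing in for any LLM completion call.

FEW_SHOT_COT = """\
Q: A farmer has 3 baskets with 12 apples each. He sells 10 apples. How many are left?
A: Let's think step by step.
There are 3 * 12 = 36 apples in total.
After selling 10, 36 - 10 = 26 apples remain.
The answer is 26.
"""

def build_cot_prompt(question: str) -> str:
    """Prepend a worked, step-by-step exemplar so the model is nudged to
    produce intermediate reasoning before its final answer."""
    return f"{FEW_SHOT_COT}\nQ: {question}\nA: Let's think step by step.\n"

def answer_with_cot(question: str, query_llm) -> str:
    """query_llm: any callable str -> str returning the model completion."""
    completion = query_llm(build_cot_prompt(question))
    # By convention the exemplar ends with "The answer is ..."; parse that out.
    for line in reversed(completion.strip().splitlines()):
        if line.lower().startswith("the answer is"):
            return line
    return completion  # fall back to the raw chain of thought

if __name__ == "__main__":
    # Stub model standing in for a real LLM, just to show the call pattern.
    fake_llm = lambda prompt: "2 + 2 = 4.\nThe answer is 4."
    print(answer_with_cot("What is 2 + 2?", fake_llm))
```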