

Data Science by ODS.ai 🦜

First Telegram Data Science channel. Covering all technical and popular stuff about anything related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math, and applications of the former. To reach the editors, contact: @haarrp

Subscribers: 51,652 (+31 in 24 hours, +33 in 7 days, +41 in 30 days)


Subscriber growth rate


Repost from Machinelearning
👑 Llama 3 is here, with a brand new tokenizer! 🦙 Meta has released a new SOTA model, Llama 3, in two versions with 8B and 70B parameters. Context length is 8K, with support for 30 languages.
HF: https://huggingface.co/spaces/ysharma/Chat_with_Meta_llama3_8b
Blog: https://ai.meta.com/blog/meta-llama-3/
You can try 🦙 Meta Llama 3 70B and 🦙 Meta Llama 3 8B via a 🔥 free interface: https://llama3.replicate.dev/
@ai_machinelearning_big_data
🔥 17 7👍 6
⚡️ Map-relative Pose Regression 🔥 (#CVPR2024 highlight)
For years, absolute pose regression did not work. There was some success from massively synthesising scene-specific data. We train a scene-agnostic APR, and it works.
Paper: https://arxiv.org/abs/2404.09884
Page: https://nianticlabs.github.io/marepo
@opendatascience
👍 5🔥 5 3
🔥 ControlNet++: Improving Conditional Controls with Efficient Consistency Feedback
Proposes an approach that improves controllable generation by explicitly optimizing pixel-level cycle consistency.
Project: https://liming-ai.github.io/ControlNet_Plus_Plus/
Abs: https://arxiv.org/abs/2404.07987
@opendatascience
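The post gives only the one-line idea. As a rough sketch (not the paper's implementation), pixel-level cycle consistency means re-extracting the conditioning signal from the generated image and penalizing the mismatch with the input condition; here `extract_condition` is a hypothetical stand-in for a pretrained discriminative model (e.g. a segmentation or edge-detection network):

```python
import numpy as np

def cycle_consistency_loss(cond, gen_image, extract_condition):
    # Re-extract the condition (e.g. a segmentation map or edge map)
    # from the generated image, then compare it pixel-wise against
    # the condition that was fed to the generator.
    cond_pred = extract_condition(gen_image)
    return np.mean((cond_pred - cond) ** 2)

# Toy usage with a dummy "extractor" that averages the channels.
cond = np.zeros((8, 8))
gen_image = np.zeros((8, 8, 3))
loss = cycle_consistency_loss(cond, gen_image,
                              lambda img: img.mean(axis=-1))
```

In the paper's framing this feedback term is added to the usual diffusion training objective; the sketch above only shows the consistency term itself.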
🔥 10👍 9 3
🥔 YaART: Yet Another ART Rendering Technology
💚 This study introduces YaART, a novel production-grade text-to-image cascaded diffusion model aligned to human preferences using Reinforcement Learning from Human Feedback (RLHF).
💜 During the development of YaART, Yandex focused especially on the choice of model and training dataset sizes, aspects that had not been systematically investigated for text-to-image cascaded diffusion models before.
💖 In particular, the researchers comprehensively analyze how these choices affect both the efficiency of the training process and the quality of the generated images, which is highly important in practice.
▪Paper page: https://ya.ru/ai/art/paper-yaart-v1
▪Arxiv: https://arxiv.org/abs/2404.05666
▪Habr: https://habr.com/ru/companies/yandex/articles/805745/
@opendatascience


👍 21 9🔥 8💩 4🍌 1🖕 1
⚡️ PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models
Significantly improves fine-tuning performance by simply changing the initialization of LoRA's A and B matrices from Gaussian/zero to the principal components of W.
▪Github: https://github.com/GraphPKU/PiSSA
▪Paper: https://arxiv.org/abs/2404.02948
@opendatascience
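In numpy terms, the initialization described in the post can be sketched as follows (a minimal illustration of the idea, not the repo's code; `pissa_init` is a made-up name): take the top-r singular triplets of a weight matrix W as the trainable LoRA factors, and keep the residual as the frozen base weight so the forward pass is initially unchanged.

```python
import numpy as np

def pissa_init(W, r):
    # PiSSA idea: instead of Gaussian A / zero B, initialize the
    # LoRA factors from the top-r principal components of W.
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * np.sqrt(S[:r])              # (m, r) trainable
    B = np.sqrt(S[:r])[:, None] * Vt[:r, :]    # (r, n) trainable
    W_res = W - A @ B                          # frozen residual
    return A, B, W_res

W = np.random.randn(64, 32)
A, B, W_res = pissa_init(W, r=8)
# At init, the effective weight W_res + A @ B equals W exactly,
# so the model starts from the pretrained function.
assert np.allclose(W, W_res + A @ B)
```

Because A and B start from the dominant singular directions of W rather than noise, the low-rank update begins along the directions that matter most, which is what the post credits for the improved fine-tuning performance.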
🔥 20😁 11👍 4🌭 3 2🍌 2
Repost from Machinelearning
⚡️ Awesome CVPR 2024 Papers, Workshops, Challenges, and Tutorials!
The 2024 Conference on Computer Vision and Pattern Recognition (CVPR) received 11,532 paper submissions, of which only 2,719 were accepted, about 23.6% of the total. Below is a list of the best papers, guides, articles, workshops, and datasets from CVPR 2024.
▪Github
@ai_machinelearning_big_data
🔥 12👍 5 3
Objective-Driven AI: Towards AI systems that can learn, remember, reason, and plan
A presentation by Yann Lecun on the #SOTA in #DL
YouTube: https://www.youtube.com/watch?v=MiqLoAZFRSE
Slides: Google Doc
Paper: Open Review
P.S. Stole the post from @chillhousetech
Yann Lecun | Objective-Driven AI: Towards AI systems that can learn, remember, reason, and plan

Ding Shum Lecture, 3/28/2024
Speaker: Yann Lecun, New York University & META
Title: Objective-Driven AI: Towards AI systems that can learn, remember, reason, and plan

Abstract: How could machines learn as efficiently as humans and animals? How could machines learn how the world works and acquire common sense? How could machines learn to reason and plan? Current AI architectures, such as Auto-Regressive Large Language Models, fall short. I will propose a modular cognitive architecture that may constitute a path towards answering these questions. The centerpiece of the architecture is a predictive world model that allows the system to predict the consequences of its actions and to plan a sequence of actions that optimize a set of objectives. The objectives include guardrails that guarantee the system's controllability and safety. The world model employs a Hierarchical Joint Embedding Predictive Architecture (H-JEPA) trained with self-supervised learning. The JEPA learns abstract representations of the percepts that are simultaneously maximally informative and maximally predictable.

The corresponding working paper is available here:

https://openreview.net/forum?id=BZ5a1r-kVsf

👍 9 3🤡 3👎 1👏 1💩 1
Let’s get back to posting 😌
🫡 18👍 3🥴 3 2
Position: Analyst/Researcher for AI Team at Cyber.fund

About Cyber.fund:
Cyber.fund is a pioneering $100mm research-driven fund specializing in web3, decentralized AI, autonomous agents, and self-sovereign identity. Our legacy is built on being the architects behind monumental projects such as Lido, p2p.org, =nil; foundation, Neutron, NEON, and early investments in groundbreaking technologies like Solana, Ethereum, and EigenLayer, among 150+ others. We are committed to advancing the frontiers of Fully Homomorphic Encryption (FHE) for Machine Learning, privacy-first ML (Large Language Models), AI aggregation and routing platforms, and decentralized AI solutions.

Who Are We Looking For?
A dynamic individual who straddles the worlds of business acumen and academic rigor, with:
- A robust theoretical foundation in Computer Science and a must-have specialization in Machine Learning.
- An educational background from a technical university, with a preference for PhD holders from prestigious institutions like MIT or MIPT.
- A track record of publications in the Machine Learning domain, ideally at the level of NeurIPS.
- Experience working in startups or major tech companies, ideally coupled with a background in angel investing.
- A profound understanding of algorithms, techniques, and models in ML, with an exceptional ability to translate these into innovative products.
- Fluent English, intellectual curiosity, and a fervent passion for keeping abreast of the latest developments in AI/ML.

Responsibilities:
1) Investment Due Diligence: Conduct technical, product, and business analysis of potential AI/ML investments. This includes market analysis, engaging with founders and technical teams, and evaluating the scalability, reliability, risks, and limitations of products.
2) Portfolio Support: Provide strategic and technical support to portfolio companies in AI/ML. Assist in crafting technological strategies, hiring, industry networking, identifying potential project challenges, and devising solutions.
3) Market and Technology Research: Stay at the forefront of ML/DL/AI trends (e.g., synthetic data, flash attention, 1-bit LLMs, FHE for ML, JEPA, etc.). Write publications and whitepapers, and potentially host X spaces/streams/podcasts on these subjects (in English). Identify promising companies and projects for investment opportunities.

How to Apply?
If you align with our requirements and are excited by the opportunity to contribute to our vision, please send your CV to [email protected]. Including a cover letter, links to publications, open-source contributions, and other achievements will be advantageous.

Location: Flexible, but the candidate should be within the time zones from EET to EST (Eastern Europe to the US East Coast).

This is not just a job opportunity; it's a call to be part of a visionary journey reshaping the landscape of AI and decentralized technology. Join us at Cyber.fund and be at the forefront of the technological revolution.
🤡 23👍 11💩 6👎 5 3
LLMs are in their childhood years. Source.
👍 58🤨 9👎 2 2🥰 2🦄 2 1