
https://youtu.be/l8pRSuU81PU?si=K8AXVCTiVwtS2O29
Let's reproduce GPT-2 (124M)

We reproduce the GPT-2 (124M) from scratch. This video covers the whole process: first we build the GPT-2 network, then we optimize its training to be really fast, then we set up the training run following the GPT-2 and GPT-3 papers and their hyperparameters, then we hit run, and come back the next morning to see our results, and enjoy some amusing model generations. Keep in mind that in some places this video builds on knowledge from earlier videos in the Zero to Hero Playlist (see my channel). You could also see this video as building my nanoGPT repo, which by the end is about 90% similar.

Links:

- build-nanogpt GitHub repo, with all the changes in this video as individual commits:

https://github.com/karpathy/build-nanogpt

- nanoGPT repo:

https://github.com/karpathy/nanoGPT

- llm.c repo:

https://github.com/karpathy/llm.c

- my website:

https://karpathy.ai

- my twitter:

https://twitter.com/karpathy

- our Discord channel:

https://discord.gg/3zy8kqD9Cp

Supplementary links:

- Attention is All You Need paper:

https://arxiv.org/abs/1706.03762

- OpenAI GPT-3 paper:

https://arxiv.org/abs/2005.14165

- OpenAI GPT-2 paper:

https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf

The GPU I'm training the model on is from Lambda GPU Cloud, which I think is the best and easiest way to spin up an on-demand GPU instance in the cloud that you can ssh into:

https://lambdalabs.com

Chapters:
00:00:00 intro: Let's reproduce GPT-2 (124M)
00:03:39 exploring the GPT-2 (124M) OpenAI checkpoint
00:13:47 SECTION 1: implementing the GPT-2 nn.Module
00:28:08 loading the huggingface/GPT-2 parameters
00:31:00 implementing the forward pass to get logits
00:33:31 sampling init, prefix tokens, tokenization
00:37:02 sampling loop
00:41:47 sample, auto-detect the device
00:45:50 let's train: data batches (B,T) → logits (B,T,C)
00:52:53 cross entropy loss
00:56:42 optimization loop: overfit a single batch
01:02:00 data loader lite
01:06:14 parameter sharing wte and lm_head
01:13:47 model initialization: std 0.02, residual init
01:22:18 SECTION 2: Let's make it fast. GPUs, mixed precision, 1000ms
01:28:14 Tensor Cores, timing the code, TF32 precision, 333ms
01:39:38 float16, gradient scalers, bfloat16, 300ms
01:48:15 torch.compile, Python overhead, kernel fusion, 130ms
02:00:18 flash attention, 96ms
02:06:54 nice/ugly numbers. vocab size 50257 → 50304, 93ms
02:14:55 SECTION 3: hyperparameters, AdamW, gradient clipping
02:21:06 learning rate scheduler: warmup + cosine decay
02:26:21 batch size schedule, weight decay, FusedAdamW, 90ms
02:34:09 gradient accumulation
02:46:52 distributed data parallel (DDP)
03:10:21 datasets used in GPT-2, GPT-3, FineWeb (EDU)
03:23:10 validation data split, validation loss, sampling revive
03:28:23 evaluation: HellaSwag, starting the run
03:43:05 SECTION 4: results in the morning! GPT-2, GPT-3 repro
03:56:21 shoutout to llm.c, equivalent but faster code in raw C/CUDA
03:59:39 summary, phew, build-nanogpt github repo

Corrections: I will post all errata and followups to the build-nanogpt GitHub repo (link above).

SuperThanks: I experimentally enabled them on my channel yesterday. Totally optional and only use if rich. All revenue goes to supporting my work in AI + Education.
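The warmup + cosine decay schedule from the 02:21:06 chapter is a few lines of math: linear warmup up to a peak learning rate, then a cosine curve down to a floor. A minimal sketch, assuming illustrative constants rather than the video's exact values:

```python
import math

def get_lr(step, max_lr=6e-4, min_lr=6e-5, warmup_steps=10, max_steps=50):
    """Linear warmup to max_lr, then cosine decay down to min_lr."""
    if step < warmup_steps:
        # linear warmup: ramp from ~0 up to max_lr over warmup_steps
        return max_lr * (step + 1) / warmup_steps
    if step > max_steps:
        # past the schedule: hold at the floor
        return min_lr
    # cosine decay between warmup_steps and max_steps
    decay_ratio = (step - warmup_steps) / (max_steps - warmup_steps)
    coeff = 0.5 * (1.0 + math.cos(math.pi * decay_ratio))  # goes 1 -> 0
    return min_lr + coeff * (max_lr - min_lr)
```

In a training loop, the returned value is typically written into each `param_group["lr"]` of the optimizer before `optimizer.step()`.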

llama-recipes/recipes/use_cases/agents/langchain at main · meta-llama/llama-recipes

Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods to cover single/multi-node GPUs. Supports default & custom datasets for applications such as summarization an...


https://www.youtube.com/watch?v=UnsDyvxfvwo&t=13s

Self-Supervised Anomaly Detection in Time Series: A Brief Introduction

Speaker: Aitor Sanchez. Institution: ISG.

https://www.youtube.com/watch?v=InTohA7Tsg4

Semi supervised Anomaly Detection

NCTU CS PhD Seminar, Spring 2020. Speaker: Sheng-Feng Yu (0786039).

Gradient Flow - Borealis AI

This blog is the first in a series in which we consider machine learning from four different viewpoints.

https://towardsdatascience.com/real-time-time-series-anomaly-detection-981cf1e1ca13

Real-Time Time Series Anomaly Detection | by Marco Cerliani | Towards Data Science

Develop a Monitoring System on Multiple Time Series Sensors
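A common baseline for this kind of real-time monitoring over sensor streams is a rolling z-score check: flag a point that deviates from its trailing window by more than a few standard deviations. A minimal pure-Python sketch — the window size and threshold here are illustrative assumptions, not values from the article above:

```python
import math
from collections import deque

def detect_anomalies(values, window=20, threshold=3.0):
    """Flag points whose z-score vs. the trailing window exceeds the threshold."""
    history = deque(maxlen=window)  # trailing window of recent values
    flags = []
    for x in values:
        if len(history) == window:
            mean = sum(history) / window
            var = sum((v - mean) ** 2 for v in history) / window
            std = math.sqrt(var)
            # std > 0 guards division by zero; a constant window flags nothing
            flags.append(std > 0 and abs(x - mean) / std > threshold)
        else:
            flags.append(False)  # not enough history yet
        history.append(x)
    return flags
```

For example, a spike of 50.0 after thirty points alternating between 0.0 and 1.0 is flagged, while the alternating points themselves are not.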

https://www.youtube.com/watch?v=da5GNkAZO54

Bitcoin Price Prediction with FB Prophet | Time Series with Machine Learning

https://www.youtube.com/watch?v=eq7_3HA7QQI

#25 - Introduction to Time Series Forecasting with Prophet

The BrAIn Roads (English subtitles)

The BrAIn Roads documents the thrilling research journey of a group of researchers of the Computer Vision Centre tasked with developing AI for autonomous driving on rural roads. Funded by the Generalitat de Catalunya, the project aims to offer an economical and sustainable solution to the lack of public transport in rural and isolated areas. The scientific team faces the complex challenge of training the AI on a narrow mountain road with poor visibility, adverse weather conditions, and no mobile connectivity – a proof of concept distinct from existing autonomous driving prototypes, mainly designed for urban areas.

Taking us from the laboratories of the CVC to the field tests in the natural park of Alt Pirineu, the documentary explores the intricacies of artificial intelligence research. It provides insight into the successes, setbacks, and technological challenges faced by the team throughout the project. "The BrAIn Roads" offers a captivating perspective on the world of science and innovation, connecting the audience with the inspiring story of a research team dedicated to transforming autonomous driving and paving the way for a more accessible and sustainable mobility of the future.

This documentary showcases the outcomes of the Advanced Digital Technologies project focused on autonomous driving in rural areas. The initiative was financially supported by the Generalitat de Catalunya and executed by the CVC, in collaboration with SEAT, Volkswagen, and the i2CAT Foundation.

"The BrAIn Roads" is a production by Minifilms TV. More information: www.cvc.uab.es/thebrainroads