
Computer Science and Programming

Channel specialized in advanced topics: Artificial Intelligence, Machine Learning, Deep Learning, Computer Vision, Data Science, Python. For ads: @otchebuch & @cobbl, https://telega.io/c/computer_science_and_programming

Country not specified · English · 1 462 · Technologies & Mixed · 315
Advertising posts
155 923 subscribers
-6 in 24 hours
+11 in 7 days
+562 in 30 days


Subscriber growth rate


👍 12
#promo
Who's here? We've asked for a free link to a paid channel for our subs. x2-x3 Signals here 👉 CLICK HERE TO JOIN 👈 👉 CLICK HERE TO JOIN 👈 👉 CLICK HERE TO JOIN 👈 ❗️JOIN FAST! FIRST 1000 SUBS WILL BE ACCEPTED
👎 11 👍 5
How to code with GitHub Copilot?

A recent study by GitHub and Microsoft found that AI now authors 46% of new code. They also found that overall developer productivity rose by 55%, leading to more efficient coding processes. When we talk about AI-powered coding, we mainly talk about GitHub Copilot. But how does GitHub Copilot work? The process goes through the following steps:

1. Secure prompt transmission: your prompts are securely sent to Copilot, ensuring data privacy.
2. Contextual understanding: Copilot analyzes the code around your cursor, the file type, and other open files to offer relevant suggestions.
3. Content filtering: it filters out personal data and inappropriate content, focusing solely on generating helpful code.
4. Code generation: based on the intent identified in your prompts, Copilot crafts code suggestions that align with your coding style and project standards.
5. User interaction: here, we decide whether to use, tweak, or reject Copilot's suggestions.
6. Feedback loop: Copilot learns from your interactions, improving its suggestions; every time you tweak or reject its ideas, it learns from that. It employs techniques like zero-shot (asking without examples), one-shot (asking with one example), and few-shot learning (providing multiple examples) to adapt to your instructions, whether you provide examples or not.
7. Prompt history retention: it remembers past prompts and interactions, making future suggestions more accurate.
👍 18 👎 5
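To make the zero-shot / one-shot / few-shot distinction from step 6 concrete, here is a minimal Python sketch of how the same request can be framed with zero, one, or several examples before being sent to a code-generating model. This is illustrative only: Copilot's internal prompting is not public, and `complete()` is a hypothetical stand-in for whatever model endpoint you use.

```python
# Illustrative sketch: Copilot's internal prompting is not public.
# `complete` is a hypothetical placeholder for any code-generation endpoint.

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your own model call here")

TASK = "# Write a function that returns the n-th Fibonacci number\n"

# Zero-shot: ask without any examples.
zero_shot = TASK

# One-shot: show a single worked example first.
one_shot = (
    "# Write a function that reverses a string\n"
    "def reverse_string(s: str) -> str:\n"
    "    return s[::-1]\n\n"
    + TASK
)

# Few-shot: provide several examples so the model can infer naming,
# typing, and style conventions it should imitate.
few_shot = (
    "# Write a function that reverses a string\n"
    "def reverse_string(s: str) -> str:\n"
    "    return s[::-1]\n\n"
    "# Write a function that checks whether a number is even\n"
    "def is_even(n: int) -> bool:\n"
    "    return n % 2 == 0\n\n"
    + TASK
)

print(few_shot)  # the prompt that would be sent in the few-shot case
```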
⚠ Message was hidden by channel owner
⚠ Message was hidden by channel owner
👍 24 👎 2
⚠ Message was hidden by channel owner
👍 6 👎 1
⚠ Message was hidden by channel owner
How does Discord handle a million online users in a single server?

As time passed, Discord's user base, including its most prominent communities, grew massively. Servers started to slow down and hit their throughput limits, so Discord needed to scale individual servers from tens of thousands to millions of concurrent users.

Whenever someone sends a message on Discord or joins a channel, the UI of everyone online on that server has to be updated. Discord calls that server a "guild," and it runs in a single Elixir process, while another process (a "session") runs for each connected client. The guild process tracks the sessions of users who are members of that guild and is responsible for fanning out actions to those sessions. When sessions receive updates, they forward them over their WebSocket connection to the client.

The main issue is that a single message needs to go to everyone online on that server: if a server has 1,000 people online and each of them sends one message, that is 1,000,000 notifications.
👍 8 👎 1
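For intuition about the scaling problem described above, here is a small Python sketch (the real guild and sessions are Elixir processes, so this is only an illustrative model, not Discord's code) of a naive guild that fans every message out to every online session. The cost grows as messages × online users, which is how 1,000 users each sending one message becomes 1,000,000 notifications.

```python
# Illustrative model only: Discord's guild/session are Elixir processes;
# here they are plain Python objects to show the fanout cost.

class Session:
    """One connected client; in Discord this forwards updates over a WebSocket."""
    def __init__(self, user_id: int):
        self.user_id = user_id
        self.received = 0

    def push(self, message: str) -> None:
        self.received += 1  # stand-in for "send over the WebSocket"


class Guild:
    """One server ("guild"): a single process that owns the member sessions."""
    def __init__(self):
        self.sessions: list[Session] = []

    def connect(self, session: Session) -> None:
        self.sessions.append(session)

    def send_message(self, message: str) -> int:
        # Naive fanout: every message is pushed to every online session.
        for session in self.sessions:
            session.push(message)
        return len(self.sessions)


guild = Guild()
for uid in range(1000):
    guild.connect(Session(uid))

# 1,000 online users each sending one message -> 1,000 * 1,000 pushes.
total_pushes = sum(guild.send_message(f"hello from {uid}") for uid in range(1000))
print(total_pushes)  # 1000000
```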
So, how did they solve it?

They first tried to understand how the system performs. They tracked what the Elixir processes were doing and whether they were stuck waiting on something, recorded the event types, how many messages of each kind they received, and their processing times, and also measured memory usage, garbage-collector performance, and so on. After the analysis, they created the following strategy:

1. Passive sessions: Discord significantly reduced the amount of data processed and sent by differentiating between active and passive user connections, cutting the fanout work by 90% for large servers.

2. Relays: implementing a relay system (read: multithreading) allowed Discord to split the fanout process across multiple machines, enabling a single guild to utilize more resources and support more prominent communities. Relays maintain the connections to the sessions instead of the guild and are responsible for doing the fanout with permission checks.

3. Worker processes and ETS: to keep the guild responsive, Discord employed worker processes and Erlang Term Storage (ETS) for operations that require iterating over large sets of members, avoiding bottlenecks in the guild process. ETS is an in-memory database that multiple Elixir processes can access safely; the guild can spawn a new worker process and hand it the ETS table, so the worker runs the expensive operation and offloads the central guild process.

🔗 https://discord.com/blog/maxjourney-pushing-discords-limits-with-a-million-plus-online-users-in-a-single-server
👍 21
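As a rough illustration of ideas 1 and 2 above (again in Python rather than Elixir, so a conceptual sketch of the technique, not Discord's implementation), the guild can skip passive sessions during fanout and delegate the remaining work to a few relays, each owning a slice of the sessions. The 90%-passive split below is just an assumed number chosen to mirror the figure in the post.

```python
# Conceptual sketch of "passive sessions" and "relays"; not Discord's code.

from dataclasses import dataclass, field


@dataclass
class Session:
    user_id: int
    active: bool = True   # passive sessions do not need full message fanout
    received: int = 0

    def push(self, message: str) -> None:
        self.received += 1


@dataclass
class Relay:
    """Owns a slice of the sessions and does the fanout on the guild's behalf."""
    sessions: list[Session] = field(default_factory=list)

    def fanout(self, message: str) -> None:
        for session in self.sessions:
            if session.active:        # idea 1: skip passive connections
                session.push(message)


@dataclass
class Guild:
    relays: list[Relay] = field(default_factory=lambda: [Relay() for _ in range(4)])

    def connect(self, session: Session) -> None:
        # idea 2: spread sessions across relays instead of one giant list
        self.relays[session.user_id % len(self.relays)].sessions.append(session)

    def send_message(self, message: str) -> None:
        # The guild only talks to a few relays; the relays do the heavy fanout
        # (on separate machines in the real system).
        for relay in self.relays:
            relay.fanout(message)


guild = Guild()
for uid in range(1000):
    # Assume ~90% of connections are passive (illustrative number only).
    guild.connect(Session(uid, active=(uid % 10 == 0)))

guild.send_message("hello")
delivered = sum(s.received for r in guild.relays for s in r.sessions)
print(delivered)  # 100 -- only the active sessions receive the fanout
```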