

Vyper Squad ® ™

VYPER SQUAD OFFICIAL ™ ✖️ ML, Deep Learning ✖️ Artificial Intelligence ✖️ Knowledge ✖️ Fun ✖️ Software ✖️ Apps ✖️ Udemy Courses Buy ads: https://telega.io/c/+VZqZDu2c9NrJIIFJ Open for cross/paid promo Username: @Vyper_Squad Owner: @Hacker_Club2

Advertising posts
Subscribers: 5 036
-1 in the last 24 hours
+10 in the last 7 days
+197 in the last 30 days
Publishing-time distributions


Post analysis
Messages / Views / Shares
View dynamics
01
[100% Off] Complete SmartPhone Graphic Design - 3 in 1 Course Free Course Coupon Download Link || https://ift.tt/YwKOFgZ
500
02
So, how did they solve it? They first tried to understand how the system performs. They traced what the Elixir processes were doing and whether they were stuck waiting on something, recorded the event types, how many messages of each kind they received, and their processing times, and measured memory use and garbage-collector performance. After the analysis, they created the following strategy:
1. Passive sessions: Discord significantly reduced the amount of data processed and sent by differentiating between active and passive user connections, cutting the fanout work by 90% for large servers.
2. Relays: A relay system allowed Discord to split the fanout process across multiple machines, so a single guild can use more resources and support larger communities. Relays maintain connections to the sessions instead of the guild and are responsible for doing the fanout with permission checks.
3. Worker processes and ETS: To keep the guild responsive, Discord used worker processes and Erlang Term Storage (ETS) for operations that iterate over large sets of members, avoiding bottlenecks in the guild process. ETS is an in-memory store that multiple Elixir processes can access safely, so the guild can spawn a worker process, pass it the ETS table, and let that worker run expensive operations, offloading the central guild server.
🔗 https://discord.com/blog/maxjourney-pushing-discords-limits-with-a-million-plus-online-users-in-a-single-server
901
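Conceptually, the passive-session and relay ideas can be sketched in a few lines. This is a hedged Python toy, not Discord's Elixir code; the class names, the 4-relay count, and the 10% active ratio are all illustrative assumptions:

```python
from collections import defaultdict

class Relay:
    """Holds a subset of the sessions so fanout work can be split
    across several relay processes/machines (here: plain objects)."""
    def __init__(self):
        self.sessions = []          # (session_id, active_flag) pairs
        self.delivered = defaultdict(int)

    def fanout(self, message):
        # Passive sessions are skipped entirely; they would only get
        # a lightweight signal on demand.
        for session_id, active in self.sessions:
            if active:
                self.delivered[session_id] += 1

class Guild:
    """The guild no longer talks to every session directly; it only
    forwards each message to its relays."""
    def __init__(self, n_relays):
        self.relays = [Relay() for _ in range(n_relays)]

    def connect(self, session_id, active):
        # Shard sessions over relays by a stable hash.
        idx = hash(session_id) % len(self.relays)
        self.relays[idx].sessions.append((session_id, active))

    def send(self, message):
        for relay in self.relays:
            relay.fanout(message)

guild = Guild(n_relays=4)
for i in range(1000):
    # Suppose only 10% of connected members are actively viewing.
    guild.connect(f"user{i}", active=(i % 10 == 0))
guild.send("hello")
total = sum(sum(r.delivered.values()) for r in guild.relays)
print(total)  # 100 deliveries instead of 1000
```

Skipping passive sessions is what cuts the per-message work; the relays only change *where* that work runs.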
03
How does Discord handle a million online users in a single server? As time passed, Discord's user base, including its most prominent communities, grew massively. Affected servers started to slow down and hit their throughput limits, so Discord needed to scale individual servers from tens of thousands to millions of concurrent users. Whenever someone sends a message on Discord or joins a channel, the UI of everyone online on that server needs to be updated. Internally, that server is a "guild," which runs in a single Elixir process, while another process (a "session") exists for each connected client. The guild process tracks the sessions of users who are members of that guild and is responsible for fanning out actions to those sessions. When sessions get updates, they forward them over the WebSocket connection to the client. The main issue is that a single message needs to go to everyone online on that server: if a server has 1000 people online and each of them sends one message, that is 1 million notifications.
2214
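The quadratic fanout cost described above is easy to make concrete. A small Python sketch (the function name and messages_per_user parameter are mine, not Discord's):

```python
def fanout_notifications(online_users, messages_per_user=1):
    """Every message must reach every online member, so total work
    grows with online_users * total_messages."""
    total_messages = online_users * messages_per_user
    return total_messages * online_users

print(fanout_notifications(1000))    # 1_000_000
print(fanout_notifications(10_000))  # 100_000_000
```

A 10x jump in online users means 100x the notification work, which is why the single guild process became the bottleneck.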
04
⁉️Fact: 🤔 Rejecting girls is a superpower.👀
910
05
❤️ Craving healthier relationships? Spice up your love life on our Telegram channel! Subscribe us for: 🔴Relationship Tips 🔴Inspiring Quotes 🔴Embarrassing Moments 🔴Date Tips 🔴Pickup Lines Single or taken, let’s turn up the heat on love together! 👩‍❤️‍👨💬 CLICK HERE TO JOIN
10
06
How old are you? (Poll: 18- / 18-25 / 25-32 / 32+)
1140
07
Anyone Who wants to Cross Promo with my Channel Dm Me Here @Hacker_club2
3930
08
20 SQL query optimization techniques Below are the 20 SQL query optimization techniques I found most significant:
1. Create an index on huge tables (>1,000,000 rows)
2. Use EXISTS() instead of COUNT() to check whether an element is in the table
3. SELECT specific fields instead of using SELECT *
4. Avoid subqueries in the WHERE clause
5. Avoid SELECT DISTINCT where possible
6. Filter with the WHERE clause instead of HAVING
7. Create joins with INNER JOIN (not WHERE)
8. Use LIMIT to sample query results
9. Use UNION ALL instead of UNION wherever possible
10. Use UNION instead of a WHERE ... OR ... query
11. Run your query during off-peak hours
12. Avoid using OR in join queries
13. Choose GROUP BY over window functions
14. Use derived and temporary tables
15. Drop indexes before loading bulk data (and recreate them afterwards)
16. Use materialized views instead of views
17. Avoid the != or <> (not equal) operator
18. Minimize the number of subqueries
19. Prefer LEFT/RIGHT JOIN when it gives the same output as an INNER JOIN
20. Reuse temporary result sets when you need the same dataset repeatedly
68915
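A couple of the tips above can be tried directly with Python's built-in sqlite3 module. An illustrative sketch only (the table and data are made up, and actual gains depend on the engine, indexes, and data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.executemany(
    "INSERT INTO users (name, email) VALUES (?, ?)",
    [(f"user{i}", f"user{i}@example.com") for i in range(1000)],
)

# Tip 2: EXISTS can stop at the first match, while COUNT(*) must
# count every matching row before you can compare it to zero.
exists = conn.execute(
    "SELECT EXISTS(SELECT 1 FROM users WHERE name = ?)", ("user42",)
).fetchone()[0]
print(bool(exists))  # True

# Tip 3: select only the columns you need instead of SELECT *.
row = conn.execute("SELECT name FROM users WHERE id = 1").fetchone()
print(row)  # ('user0',)
```

On a large table, pairing tip 2 with tip 1 (an index on `name`) turns the existence check into a single index probe.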
09
https://www.highcpmgate.com/gc3y7m3vu?key=453de74b5408ae597e1d04e8d42f84d4
3951
10
Stack Overflow's architecture is not what you think it is In a recent interview with Scott Hanselman, Roberta Arcoverde, Head of Engineering at Stack Overflow, told the story of Stack Overflow's architecture. They handle more than 6,000 requests per second and 2 billion page views per month, and they manage to render a page in about 12 milliseconds. Thinking about it for a moment, we might imagine they use some kind of microservice solution running in the cloud with Kubernetes. But the story is a bit different. Their solution is 15 years old, and it is a big monolithic application running on-premises. It is actually a single app on IIS, which runs 200 sites. This single app runs on nine web servers and a single SQL Server (plus one hot standby). They also use two levels of cache: one on SQL Server with a lot of RAM (1.5 TB), which serves about 30% of DB accesses from RAM, plus two Redis servers (master and replica). Besides this, they have 3 tag engine servers and 3 Elasticsearch servers, which handle 34 million daily searches. All of this is run by a team of 50 engineers, who manage to deploy to production in 4 minutes several times daily. Their full tech stack is:
🔹 C# + ASP.NET MVC
🔹 Dapper ORM
🔹 StackExchange.Redis
🔹 MiniProfiler
🔹 Jil JSON deserializer
🔹 Exceptional logger for SQL
🔹 Sigil, a .NET CIL generation helper (for when C# isn't fast enough)
🔹 NetGain, a high-performance WebSocket server
🔹 Opserver, a monitoring dashboard polling most systems and feeding from Orion, Bosun, or WMI
🔹 Bosun, a backend monitoring system written in Go
4003
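The two cache levels can be sketched as a read-through hierarchy: check a fast per-server cache, then a shared one, and only then hit the database. A hedged Python toy, not Stack Overflow's code; the names and the dict-backed "Redis" stand-in are assumptions:

```python
class TwoLevelCache:
    """Read-through cache: local dict first, then a shared store
    (standing in for Redis), then the 'database' as a last resort."""
    def __init__(self, db):
        self.local = {}    # per-web-server in-memory cache
        self.shared = {}   # stand-in for the Redis master/replica pair
        self.db = db
        self.db_hits = 0   # how often we actually reached the database

    def get(self, key):
        if key in self.local:
            return self.local[key]
        if key in self.shared:
            value = self.shared[key]
            self.local[key] = value   # promote to the fast level
            return value
        self.db_hits += 1
        value = self.db[key]
        self.shared[key] = value
        self.local[key] = value
        return value

cache = TwoLevelCache(db={"question:1": "How do I exit Vim?"})
for _ in range(3):
    cache.get("question:1")
print(cache.db_hits)  # 1 - only the first lookup reaches the database
```

With nine web servers sharing one SQL Server, this shape is what keeps repeated page renders off the database.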
11
Hi guys, open the link below once to support us: https://phonocheck.blogspot.com Thanks for your support, guys! Leave a reaction.
4492
12
#ad #promo
10
13
How to enable Continuous Integration with Pull Requests? With pull requests, we lost the ability to have a proper Continuous Integration (CI) process, because code reviews delay integration. Here is where the "Ship/Show/Ask" branching strategy comes in. The point is that not all pull requests need a code review, so whenever we make a change, we have three options:
🔹 Ship - Small changes that don't need review can be pushed directly to the main branch. Build pipelines running on the main branch execute tests and other checks, so they are a safety net for our changes. Examples: fixing a typo, bumping a minor dependency version, updating documentation.
🔹 Show - Here, we want to show what has been done. We create a branch, open a pull request, and merge it without a review. We still want people to be notified of the change (to review it later) but don't expect essential discussions. Examples: a local refactoring, fixing a bug, adding a test case.
🔹 Ask - Here, we make our changes, open a pull request, and wait for feedback. We do this because we want a proper review or a sanity check on our approach. This is the classical way of making pull requests. Examples: adding a new feature, a major refactoring, a proof of concept.
6644
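The three options above can be condensed into a small decision helper. A Python sketch of the post's own examples; the keyword sets are illustrative, not a real tooling rule:

```python
def branching_strategy(change):
    """Map a change description to Ship / Show / Ask.
    The categories and examples mirror the post; anything not
    recognized defaults to the safest option, Ask."""
    ship = {"typo fix", "minor dependency bump", "docs update"}
    show = {"local refactoring", "bug fix", "new test case"}
    if change in ship:
        return "Ship: push straight to main; pipelines are the safety net"
    if change in show:
        return "Show: open a PR and merge without waiting for review"
    return "Ask: open a PR and wait for feedback"

print(branching_strategy("typo fix"))
print(branching_strategy("new feature"))
```

Defaulting unknown changes to "Ask" keeps the strategy conservative: review is skipped only when a change is explicitly low-risk.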
14
💋💋Hentai Every Day Check This site for All Types of hentai and manga 🙈🙈 Link:- Here
2111
15
#ad #promo
2135
16
🤗 Welcome 👋 to our quotes channel, where inspiration and wisdom flow freely! ❝ Quote's W🌍rld ™ ❞ 📌 Join us on a journey of self-discovery & motivation, as we share powerful words that uplift and inspire. 💗 Let's connect through the beauty of words and spread positivity together. ✨ 🔗 Join us and let the magic of quotes transform your life!!
10
17
Free courses with certificates to learn data science, machine learning, and AI 👇👇 https://t.me/free4unow_backup Get free access to our paid premium channels today 👇👇 https://t.me/addlist/ID95piZJZa0wYzk5
1051
18
https://t.me/codingfreebooks
10
19
Media files
8260
20
Microsoft open-sourced MS-DOS 4.0. https://github.com/microsoft/MS-DOS Info - https://en.wikipedia.org/wiki/MS-DOS
8084
21
Media files
7640
22
Free Linux, DevOps cheatsheets and infographics✅ https://thatstraw.gumroad.com/l/cheatsheets
95410
23
Encryption and Decryption using Linear Algebra with C++ This project implements a text encryption and decryption system using a matrix-based encryption technique. It serves as an educational and practical exploration of matrix-based encryption, demonstrating the fundamental concepts of encryption and decryption in a user-friendly manner. 💻 https://github.com/farukalpay/TextEncryptionWithLinearAlgebra
1 04110
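The matrix-based technique such projects explore is essentially a Hill cipher: multiply letter vectors by an invertible key matrix mod 26 to encrypt, and by its modular inverse to decrypt. A minimal Python sketch (the 2x2 key is a common textbook matrix, not necessarily the one the linked repo uses; input is assumed to be even-length uppercase A-Z):

```python
KEY     = [[3, 3], [2, 5]]     # det = 9, gcd(9, 26) = 1, so invertible mod 26
KEY_INV = [[15, 17], [20, 9]]  # 3 * adj(KEY) mod 26, since 9 * 3 ≡ 1 (mod 26)

def apply_key(matrix, text):
    """Multiply each letter pair (digraph) by the given matrix mod 26.
    With KEY this encrypts; with KEY_INV it decrypts."""
    nums = [ord(c) - ord('A') for c in text]
    out = []
    for i in range(0, len(nums), 2):
        x, y = nums[i], nums[i + 1]
        out.append((matrix[0][0] * x + matrix[0][1] * y) % 26)
        out.append((matrix[1][0] * x + matrix[1][1] * y) % 26)
    return "".join(chr(n + ord('A')) for n in out)

plaintext = "HELP"
ciphertext = apply_key(KEY, plaintext)
print(ciphertext)                      # HIAT
print(apply_key(KEY_INV, ciphertext))  # HELP
```

Decryption works because KEY * KEY_INV reduces to the identity matrix mod 26, so applying both in sequence returns each digraph to its original values.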
24
.
10
25
How to do code reviews properly An essential step in the software development lifecycle is code review. It enables developers to significantly improve code quality. It resembles the authoring of a book: the author writes the story, which is then edited to ensure there are no mistakes like mixing up "you're" with "your." Code review in this context means examining and assessing other people's code. A code review has several benefits: it ensures consistency in design and implementation, optimizes code for better performance, is an opportunity for learning, knowledge sharing, and mentoring, and promotes team cohesion. What should you look for in a code review? Try to look for things such as:
🔹 Design (does this integrate well with the rest of the system, and do the interactions between components make sense?)
🔹 Functionality (does this change do what the developer intended?)
🔹 Complexity (is this code more complex than it needs to be?)
🔹 Naming (are names clear and descriptive?)
🔹 Engineering principles (SOLID, KISS, DRY)
🔹 Tests (are the different kinds of tests used appropriately? what is the code coverage?)
🔹 Style (does it follow the style guidelines?)
🔹 Documentation, etc.
1 1239
26
Media files
9500
27
Only Two People Unmuted our channel😱😱
1 1441
28
🤗 Welcome 👋 to our quotes channel, where inspiration and wisdom flow freely! ❝ Quote's W🌍rld ™ ❞ 📌 Join us on a journey of self-discovery & motivation, as we share powerful words that uplift and inspire. 💗 Let's connect through the beauty of words and spread positivity together. ✨ 🔗 Join us and let the magic of quotes transform your life!!
1280
29
51.62% of you have unmuted our channel. Thank you, guys! 🥳🥳
9520
30
We're at 51.66% unmuted members now. THANK YOU GUYS 🥳🥳
20
31
Media files
1 0121
32
Unmute our channel to show some love, guys!
50
33
React to show some love, guys!
30
34
Did I give my best last week? No two days, and no two weeks, are the same. "Best" can mean something different on different days. That is why we need weekly and monthly goals. And it's the results that matter, not the effort. I wish you a great week ahead 👋
1 0092
35
Free udemy promo code & live bin to open any website
10
36
Free udemy promo code & live bin to open any website
10
37
Free udemy promo code & live bin to open any website
150
38
Today I have a gift for you: a database of 230K Telegram channels (parsed). The fields are: username, title, description, number of members, number of views of the latest post, and update date. See it here: https://t.me/+SNeJLL8Vc3g0ZDZk Enjoy!
700
39
How to use undocumented Web APIs? There are several ways to tackle this, all of which come down to intercepting the traffic a web API produces. If the goal is to intercept HTTP/HTTPS traffic from various sources, one approach is to build a custom sniffer by hand, but that is burdensome because the solution has to be tailored to each API individually. Postman now offers a way to sniff traffic from any API over HTTP/HTTPS. What is good about this feature is that the captured traffic can be turned into a Postman collection, which you can then use to test, evaluate, and document the captured APIs. More at the following link:
🔗 https://blog.postman.com/introducing-postman-new-improved-system-proxy/
9206
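The capture-to-collection step can be sketched as plain JSON generation. A hedged Python sketch: the minimal fields follow the published Postman Collection v2.1 format, but the helper name and the captured request data below are made up for illustration (real captures would also carry headers and bodies):

```python
import json

def to_postman_collection(name, captured):
    """Turn a list of captured (method, url) pairs into a minimal
    Postman Collection v2.1 document."""
    return {
        "info": {
            "name": name,
            "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json",
        },
        "item": [
            {"name": f"{method} {url}",
             "request": {"method": method, "url": url}}
            for method, url in captured
        ],
    }

captured = [("GET", "https://api.example.com/users"),
            ("POST", "https://api.example.com/users")]
collection = to_postman_collection("Sniffed API", captured)
print(json.dumps(collection, indent=2))
```

The resulting JSON can be imported into Postman as a collection, which is what makes a traffic capture immediately testable and documentable.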
40
✅ Free Courses with Certificate: https://t.me/+t53G-cWDxOc2YzA9 ✅ Best Telegram channels to get free coding & data science resources https://t.me/addlist/ID95piZJZa0wYzk5
8246