YouTube Architecture - High Scalability
Stall for time: creative and risky tricks can help you cope in the short term while you work out longer-term solutions. Simplicity allows you to rearchitect more quickly so you can respond to problems. Nobody really knows what simplicity is, but if you aren't afraid to make changes then that's a good sign simplicity is happening. Managed hosting can't scale with you, so they moved to a colocation arrangement where they can customize everything and negotiate their own contracts. In a replicated architecture the master is multi-threaded and runs on a large machine so it can handle a lot of work, but you spend a lot of money for incremental bits of write performance, and it's not just the writes: the more expensive the hardware gets, the more expensive everything else gets too (support contracts, for example). You can usually scale the web tier by adding more machines; the database instead went through a common evolution: a single server, then a single master with multiple read slaves, then a partitioned database, and finally a sharding approach, which lets them scale the database almost arbitrarily, as in the sketch below.
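A minimal sketch of that sharding idea, assuming a simple modulo scheme; the shard count, host names, and `shard_for_user` helper are illustrative, not from the article:

```python
# Hypothetical user-to-shard routing: each user's data lives on one
# shard, so the database can grow by adding shards, and each shard
# only carries a fraction of the write load.
NUM_SHARDS = 4
SHARD_HOSTS = ["db-shard-%d.internal" % i for i in range(NUM_SHARDS)]

def shard_for_user(user_id):
    """Map a user id to the shard that holds all of that user's data."""
    return SHARD_HOSTS[user_id % NUM_SHARDS]

print(shard_for_user(42))  # always the same shard: db-shard-2.internal
```

A fixed modulo scheme like this makes resharding painful later; it is only meant to show how assigning users to shards removes the single-master write bottleneck.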
YouTube grew incredibly fast, to over 100 million video views per day, with only a handful of people responsible for scaling the site. How did they manage to deliver all that video to all those users? (Update: the site supports the delivery of over 100 million videos per day. Update 2: YouTube reaches one billion views per day. That's at least 11,574 views per second, 694,444 views per minute, and 41,666,667 views per hour.)

They use psyco, a dynamic Python-to-C compiler that takes a JIT approach to optimizing inner loops. The application server talks to various databases and other information sources to get all the data and formats the HTML page. The fastest cache is in your application server, and it doesn't take much time to send precalculated data to all your servers; page service times are usually less than 100 ms. YouTube uses a CDN to distribute their most popular content, and if a video is popular enough it will move into the CDN; beyond that they use 5 or 6 data centers plus the CDN. For images latency matters, especially when you have 60 images on a page, so images are replicated to different data centers using BigTable. Their recipe for handling rapid growth boils down to a loop that runs many times a day.
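A Python rendering of that recipe, with each step stubbed out as a placeholder so the loop structure, not the bodies, is the point:

```python
import time

def identify_and_fix_bottlenecks():
    pass  # profile, find the current hot spot, fix it

def drink():
    pass  # the recipe includes this step

def notice_new_bottleneck():
    pass  # growth immediately exposes the next weakest link

# The loop that runs many times a day: fix one bottleneck,
# rest briefly, and the next one appears.
while True:
    identify_and_fix_bottlenecks()
    drink()
    time.sleep(1)  # sleep(), then wake up to the next problem
    notice_new_bottleneck()
```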
Update: YouTube: The Platform. YouTube has added a new rich set of APIs in order to become your video platform leader, all for free: upload, edit, watch, search, and comment on video from your own site without visiting YouTube. Compose your site internally from APIs, because you'll need to expose them later anyway.

NetScaler is used for load balancing and caching static content. Keep a simple network path, without too many devices between content and users; video serving is bandwidth dependent, not really latency dependent. Requests are routed for handling by a Python application server, and the Python web code is usually not the bottleneck: it spends most of its time blocked on RPCs. Some data is calculated and sent to each application server so the values are cached in local memory, and there is row-level caching in the database as well. As load grew they went to database partitioning and then split the data into shards, with users assigned to different shards. One of their solutions for prioritizing traffic was to split it into two clusters, a video watch pool and a general cluster, as in the sketch below.
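A minimal sketch of that traffic split, assuming a simple path-based router; the pool names and the `pick_backend` helper are hypothetical, not from the article:

```python
import random

# Hypothetical backend pools: video watching gets its own cluster so
# the site's most important feature survives overload elsewhere.
WATCH_POOL = ["watch-1.internal", "watch-2.internal"]
GENERAL_POOL = ["general-1.internal", "general-2.internal"]

def pick_backend(path):
    """Send /watch requests to the dedicated pool; everything else
    (search, comments, profiles) goes to the general cluster."""
    pool = WATCH_POOL if path.startswith("/watch") else GENERAL_POOL
    return random.choice(pool)

print(pick_backend("/watch?v=abc123"))  # one of the watch-pool hosts
```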
Slaves are single-threaded, usually run on lesser machines, and replication is asynchronous, so the slaves can lag significantly behind the master; they suffered from replica lag. The social networking features of YouTube are less important, so they can be routed to a less capable cluster.

Keep it simple and cheap. Keep a simple network path: routers, switches, and other appliances may not be able to keep up with so much load. Use commodity hardware; more exotic hardware costs more, and you are also less likely to find help on the net. Use simple common tools: they use most of the tools built into Linux and layer their own on top of those. Handle random seeks well (SATA, tweaks). They were living off credit cards, so leasing hardware was the only way, and when they needed more hardware to handle load it took a few days to order and get it delivered. You succeed as a team: have a good cross-discipline team that understands the whole system and what's underneath it. With a good team all things are possible.

Python allows rapid, flexible development and deployment, and fully formed Python objects are cached, as in the sketch below.
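A minimal sketch of caching fully formed objects, assuming a memcached-style store; the in-process dict, the `render_video_page` stub, and the key scheme are illustrative stand-ins, not from the article:

```python
import pickle

cache = {}  # stands in for a real distributed cache such as memcached

def render_video_page(video_id):
    """Placeholder for the expensive path: database queries, RPCs,
    and HTML formatting."""
    return {"video_id": video_id, "html": "<html>...</html>"}

def get_video_page(video_id):
    """Serve the fully formed page object from cache when possible."""
    key = "page:%s" % video_id
    blob = cache.get(key)
    if blob is not None:
        return pickle.loads(blob)  # reuse the fully built object as-is
    page = render_video_page(video_id)
    cache[key] = pickle.dumps(page)
    return page

print(get_video_page("abc123")["video_id"])  # built once, then cached
```

Caching the serialized object skips not just the database work but the page-assembly work too, which is the point of caching fully formed objects rather than raw rows.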