update more cards
parent 476dd1c5d9
commit 5de8751070

@@ -3,8 +3,7 @@ noatcards = True
isdraft = False
+++

# Application layer

### Application layer - Introduction

@@ -30,13 +29,13 @@ Systems such as [Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-

### Disadvantage(s): application layer

- Adding an application layer with loosely coupled services requires a different approach from an architectural, operations, and process viewpoint (vs a monolithic system).
- Microservices can add complexity in terms of deployments and operations.

### [Source(s) and further reading](https://github.com/donnemartin/system-design-primer#sources-and-further-reading-9)

- [Intro to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale)
- [Crack the system design interview](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)
- [Service oriented architecture](https://en.wikipedia.org/wiki/Service-oriented_architecture)
- [Introduction to Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper)
- [Here's what you need to know about building microservices](https://cloudncode.wordpress.com/2016/07/22/msa-getting-started/)

@@ -14,8 +14,8 @@ Asynchronous workflows help reduce request times for expensive operations that w

Message queues receive, hold, and deliver messages. If an operation is too slow to perform inline, you can use a message queue with the following workflow:

- An application publishes a job to the queue, then notifies the user of job status
- A worker picks up the job from the queue, processes it, then signals the job is complete

The user is not blocked and the job is processed in the background. During this time, the client might optionally do a small amount of processing to make it seem like the task has completed. For example, if posting a tweet, the tweet could be instantly posted to your timeline, but it could take some time before your tweet is actually delivered to all of your followers.
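A minimal sketch of this workflow in Python; the in-process `queue.Queue` stands in for a real broker such as RabbitMQ or Redis, and `publish_tweet`/`fan_out_to_followers` are made-up names for the example:

```python
import queue
import threading

job_queue = queue.Queue()  # stand-in for a message broker such as RabbitMQ or Redis

def fan_out_to_followers(job):
    # placeholder for the expensive operation performed in the background
    print("delivered tweet from user", job["user_id"])

def publish_tweet(user_id, text):
    """Application side: enqueue the expensive work and return immediately."""
    job_queue.put({"user_id": user_id, "text": text})
    return "queued"  # notify the user of job status without blocking

def worker():
    """Worker side: pick up jobs, process them, then signal completion."""
    while True:
        job = job_queue.get()
        fan_out_to_followers(job)
        job_queue.task_done()  # signal the job is complete

threading.Thread(target=worker, daemon=True).start()
publish_tweet(42, "hello world")
job_queue.join()  # only for the demo: wait until the background job finishes
```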

@@ -37,11 +37,11 @@ If queues start to grow significantly, the queue size can become larger than mem

### Disadvantage(s): asynchronism

- Use cases such as inexpensive calculations and realtime workflows might be better suited for synchronous operations, as introducing queues can add delays and complexity.

### Source(s) and further reading

- [It's all a numbers game](https://www.youtube.com/watch?v=1KRYH75wgy4)
- [Applying back pressure when overloaded](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html)
- [Little's law](https://en.wikipedia.org/wiki/Little%27s_law)
- [What is the difference between a message queue and a task queue?](https://www.quora.com/What-is-the-difference-between-a-message-queue-and-a-task-queue-Why-would-a-task-queue-require-a-message-broker-like-RabbitMQ-Redis-Celery-or-IronMQ-to-function)

@@ -25,8 +25,8 @@ Active-active failover can also be referred to as master-master failover.

### Disadvantage(s): failover

- Fail-over adds more hardware and additional complexity.
- There is a potential for loss of data if the active system fails before any newly written data can be replicated to the passive.

### Master-slave replication

@@ -38,8 +38,8 @@ _[Source: Scalability, availability, stability, patterns](http://www.slideshare.

### Disadvantage(s): master-slave replication

- Additional logic is needed to promote a slave to a master.
- See [Disadvantage(s): replication](https://github.com/donnemartin/system-design-primer#disadvantages-replication) for points related to both master-slave and master-master.

### Master-master replication

@@ -50,20 +50,20 @@ _[Source: Scalability, availability, stability, patterns](http://www.slideshare.

### Disadvantage(s): master-master replication

- You'll need a load balancer or you'll need to make changes to your application logic to determine where to write.
- Most master-master systems are either loosely consistent (violating ACID) or have increased write latency due to synchronization.
- Conflict resolution comes more into play as more write nodes are added and as latency increases.
- See [Disadvantage(s): replication](https://github.com/donnemartin/system-design-primer#disadvantages-replication) for points related to both master-slave and master-master.

### Disadvantage(s): replication

- There is a potential for loss of data if the master fails before any newly written data can be replicated to other nodes.
- Writes are replayed to the read replicas. If there are a lot of writes, the read replicas can get bogged down with replaying writes and can't do as many reads.
- The more read slaves, the more you have to replicate, which leads to greater replication lag.
- On some systems, writing to the master can spawn multiple threads to write in parallel, whereas read replicas only support writing sequentially with a single thread.
- Replication adds more hardware and additional complexity.

### Source(s) and further reading: replication

- [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
- [Multi-master replication](https://en.wikipedia.org/wiki/Multi-master_replication)

@@ -0,0 +1,35 @@
+++
noatcards = True
isdraft = False
+++

# Availability vs consistency

### CAP theorem

[![](https://camo.githubusercontent.com/13719354da7dcd34cd79ff5f8b6306a67bc18261/687474703a2f2f692e696d6775722e636f6d2f62674c4d4932752e706e67)](https://camo.githubusercontent.com/13719354da7dcd34cd79ff5f8b6306a67bc18261/687474703a2f2f692e696d6775722e636f6d2f62674c4d4932752e706e67)
_[Source: CAP theorem revisited](http://robertgreiner.com/2014/08/cap-theorem-revisited)_

In a distributed computer system, you can only support two of the following guarantees:

- Consistency - Every read receives the most recent write or an error
- Availability - Every request receives a response, without guarantee that it contains the most recent version of the information
- Partition Tolerance - The system continues to operate despite arbitrary partitioning due to network failures

_Networks aren't reliable, so you'll need to support partition tolerance. You'll need to make a software tradeoff between consistency and availability._

#### CP - consistency and partition tolerance

Waiting for a response from the partitioned node might result in a timeout error. CP is a good choice if your business needs require atomic reads and writes.

#### AP - availability and partition tolerance

Responses return the most readily available version of the data, which might not be the latest. Writes might take some time to propagate when the partition is resolved.

AP is a good choice if the business needs allow for [eventual consistency](https://github.com/donnemartin/system-design-primer#eventual-consistency) or when the system needs to continue working despite external errors.
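A toy sketch of how the two choices behave during a partition, with a replica dict standing in for a partitioned node; the names and the `PARTITIONED` flag are made up for the illustration:

```python
PARTITIONED = True            # pretend the network between nodes is currently down
replica = {"views": 41}       # last value this node managed to replicate

def read_cp(key):
    """CP: prefer an error over returning possibly stale data."""
    if PARTITIONED:
        raise TimeoutError("cannot confirm the latest write during the partition")
    return replica[key]

def read_ap(key):
    """AP: always answer, even if the value might not be the latest."""
    return replica.get(key)   # most readily available version

print(read_ap("views"))       # 41, possibly stale
try:
    read_cp("views")
except TimeoutError as err:
    print(err)
```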

### Source(s) and further reading

- [CAP theorem revisited](http://robertgreiner.com/2014/08/cap-theorem-revisited/)
- [A plain english introduction to CAP theorem](http://ksat.me/a-plain-english-introduction-to-cap-theorem/)
- [CAP FAQ](https://github.com/henryr/cap-faq)

@@ -0,0 +1,13 @@
+++
noatcards = True
isdraft = False
+++

# Base 62
---

## Introduction to Base 62

- Encodes to `[a-zA-Z0-9]`, which works well for URLs, eliminating the need for escaping special characters (see the sketch below)
- Only one hash result for the original input, and the operation is deterministic (no randomness involved)
- Base 64 is another popular encoding but poses issues for URLs because of the additional `+` and `/` characters
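A minimal sketch of such an encoding; the alphabet order and function names below are illustrative assumptions, not a fixed standard:

```python
import string

# 0-9, a-z, A-Z: 62 characters, all URL-safe (alphabet order is an assumption)
ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase

def base62_encode(num: int) -> str:
    """Deterministically encode a non-negative integer into base 62."""
    if num == 0:
        return ALPHABET[0]
    digits = []
    while num > 0:
        num, rem = divmod(num, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

def base62_decode(encoded: str) -> int:
    """Invert base62_encode."""
    num = 0
    for char in encoded:
        num = num * 62 + ALPHABET.index(char)
    return num

assert base62_decode(base62_encode(125)) == 125  # 125 encodes to '21'
```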

@@ -0,0 +1,42 @@
+++
noatcards = True
isdraft = False
+++

# Cache locations

### Client caching

Caches can be located on the client side (OS or browser), [server side](https://github.com/donnemartin/system-design-primer#reverse-proxy), or in a distinct cache layer.

### CDN caching

[CDNs](https://github.com/donnemartin/system-design-primer#content-delivery-network) are considered a type of cache.

### Web server caching

[Reverse proxies](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server) and caches such as [Varnish](https://www.varnish-cache.org/) can serve static and dynamic content directly. Web servers can also cache requests, returning responses without having to contact application servers.

### Database caching

Your database usually includes some level of caching in a default configuration, optimized for a generic use case. Tweaking these settings for specific usage patterns can further boost performance.

### Application caching

In-memory caches such as Memcached and Redis are key-value stores between your application and your data storage. Since the data is held in RAM, it is much faster than typical databases where data is stored on disk. RAM is more limited than disk, so [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) algorithms such as [least recently used (LRU)](https://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used) can help invalidate 'cold' entries and keep 'hot' data in RAM.
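To illustrate the LRU idea, here is a minimal in-process sketch built on `collections.OrderedDict`; a real cache such as Redis or Memcached implements eviction for you, so this is only a model of the policy:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used entry when full."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)         # mark as most recently used ('hot')
        return self.entries[key]

    def set(self, key, value):
        self.entries[key] = value
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the 'cold' entry

cache = LRUCache(capacity=2)
cache.set("user.1", {"name": "Alice"})
cache.set("user.2", {"name": "Bob"})
cache.get("user.1")                   # touch user.1 so it stays hot
cache.set("user.3", {"name": "Eve"})  # evicts user.2, the least recently used
```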

Redis has the following additional features:

- Persistence option
- Built-in data structures such as sorted sets and lists

There are multiple levels at which you can cache, falling into two general categories: database queries and objects:

- Row-level
- Query-level
- Fully-formed serializable objects
- Fully-rendered HTML

Generally, you should try to avoid file-based caching, as it makes cloning and auto-scaling more difficult.

@@ -0,0 +1,37 @@
+++
noatcards = True
isdraft = False
+++

# Cache-aside

## Introduction

[![](https://camo.githubusercontent.com/7f5934e49a678b67f65e5ed53134bc258b007ebb/687474703a2f2f692e696d6775722e636f6d2f4f4e6a4f52716b2e706e67)](https://camo.githubusercontent.com/7f5934e49a678b67f65e5ed53134bc258b007ebb/687474703a2f2f692e696d6775722e636f6d2f4f4e6a4f52716b2e706e67)
_[Source: From cache to in-memory data grid](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast)_

The application is responsible for reading and writing from storage. The cache does not interact with storage directly. The application does the following:

- Look for entry in cache, resulting in a cache miss
- Load entry from the database
- Add entry to cache
- Return entry

```python
def get_user(self, user_id):
    user = cache.get("user.{0}", user_id)          # look for entry in cache
    if user is None:                               # cache miss
        user = db.query("SELECT * FROM users WHERE user_id = {0}", user_id)
        if user is not None:
            key = "user.{0}".format(user_id)
            cache.set(key, json.dumps(user))       # add entry to cache
    return user                                    # return entry
```

[Memcached](https://memcached.org/) is generally used in this manner.

Subsequent reads of data added to cache are fast. Cache-aside is also referred to as lazy loading. Only requested data is cached, which avoids filling up the cache with data that isn't requested.

## Disadvantage(s): cache-aside

- Each cache miss results in three trips, which can cause a noticeable delay.
- Data can become stale if it is updated in the database. This issue is mitigated by setting a time-to-live (TTL), which forces an update of the cache entry, or by using write-through.
- When a node fails, it is replaced by a new, empty node, increasing latency.

@@ -0,0 +1,31 @@
+++
noatcards = True
isdraft = False
+++

# Cache

### Cache - Introduction

[![](https://camo.githubusercontent.com/7acedde6aa7853baf2eb4a53f88e2595ebe43756/687474703a2f2f692e696d6775722e636f6d2f51367a32344c612e706e67)](https://camo.githubusercontent.com/7acedde6aa7853baf2eb4a53f88e2595ebe43756/687474703a2f2f692e696d6775722e636f6d2f51367a32344c612e706e67)
_[Source: Scalable system design patterns](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html)_

Caching improves page load times and can reduce the load on your servers and databases. In this model, the dispatcher first looks up whether the request has been made before and tries to find the previous result to return, in order to save the actual execution.

Databases often benefit from a uniform distribution of reads and writes across their partitions. Popular items can skew the distribution, causing bottlenecks. Putting a cache in front of a database can help absorb uneven loads and spikes in traffic.

### Disadvantage(s): cache

- Need to maintain consistency between caches and the source of truth, such as the database, through [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms).
- Need to make application changes such as adding Redis or memcached.
- Cache invalidation is a difficult problem; there is additional complexity associated with when to update the cache.

### Source(s) and further reading

- [From cache to in-memory data grid](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast)
- [Scalable system design patterns](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html)
- [Introduction to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale/)
- [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
- [Scalability](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
- [AWS ElastiCache strategies](http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Strategies.html)
- [Wikipedia](https://en.wikipedia.org/wiki/Cache_(computing))

@@ -0,0 +1,5 @@
Communication
-------------
---
[![](https://camo.githubusercontent.com/1d761d5688d28ce1fb12a0f1c8191bca96eece4c/687474703a2f2f692e696d6775722e636f6d2f354b656f6351732e6a7067)](https://camo.githubusercontent.com/1d761d5688d28ce1fb12a0f1c8191bca96eece4c/687474703a2f2f692e696d6775722e636f6d2f354b656f6351732e6a7067)
_[Source: OSI 7 layer model](http://www.escotal.com/osilayer.html)_

@@ -0,0 +1,32 @@
+++
noatcards = True
isdraft = False
+++

# Consistency patterns

## Introduction

With multiple copies of the same data, we are faced with options on how to synchronize them so clients have a consistent view of the data. Recall the definition of consistency from the [CAP theorem](https://github.com/donnemartin/system-design-primer#cap-theorem) - Every read receives the most recent write or an error.

### Weak consistency

After a write, reads may or may not see it. A best-effort approach is taken.

This approach is seen in systems such as memcached. Weak consistency works well in real-time use cases such as VoIP, video chat, and real-time multiplayer games. For example, if you are on a phone call and lose reception for a few seconds, when you regain connection you do not hear what was spoken during the connection loss.

### Eventual consistency

After a write, reads will eventually see it (typically within milliseconds). Data is replicated asynchronously.

This approach is seen in systems such as DNS and email. Eventual consistency works well in highly available systems.

### Strong consistency

After a write, reads will see it. Data is replicated synchronously.

This approach is seen in file systems and RDBMSes. Strong consistency works well in systems that need transactions.
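A toy sketch of the difference between the two replication styles, assuming one primary and one replica modeled as plain dictionaries; real systems replicate over a network, and here a background thread and a short sleep stand in for the asynchronous replication stream:

```python
import threading
import time

primary, replica = {}, {}  # toy stand-ins for two copies of the same data

def replicate(key, value, delay=0.05):
    time.sleep(delay)               # simulate network/replication lag
    replica[key] = value

def write_strong(key, value):
    """Strong consistency: replicate synchronously before acknowledging."""
    primary[key] = value
    replicate(key, value, delay=0)  # acknowledge only after the replica has the write
    return "ack"

def write_eventual(key, value):
    """Eventual consistency: acknowledge immediately, replicate in the background."""
    primary[key] = value
    threading.Thread(target=replicate, args=(key, value)).start()
    return "ack"

write_eventual("views", 1)
print(replica.get("views"))   # likely None: the replica hasn't caught up yet
time.sleep(0.1)
print(replica.get("views"))   # eventually 1
```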

### Source(s) and further reading

- [Transactions across data centers](http://snarfed.org/transactions_across_datacenters_io.html)

@@ -0,0 +1,44 @@
+++
noatcards = True
isdraft = False
+++

# Content delivery network

[![](https://camo.githubusercontent.com/853a8603651149c686bf3c504769fc594ff08849/687474703a2f2f692e696d6775722e636f6d2f683954417547492e6a7067)](https://camo.githubusercontent.com/853a8603651149c686bf3c504769fc594ff08849/687474703a2f2f692e696d6775722e636f6d2f683954417547492e6a7067)
_[Source: Why use a CDN](https://www.creative-artworks.eu/why-use-a-content-delivery-network-cdn/)_

A content delivery network (CDN) is a globally distributed network of proxy servers, serving content from locations closer to the user. Generally, static files such as HTML/CSS/JS, photos, and videos are served from the CDN, although some CDNs such as Amazon's CloudFront support dynamic content. The site's DNS resolution will tell clients which server to contact.

Serving content from CDNs can significantly improve performance in two ways:

- Users receive content at data centers close to them
- Your servers do not have to serve requests that the CDN fulfills

### Push CDNs

Push CDNs receive new content whenever changes occur on your server. You take full responsibility for providing content, uploading directly to the CDN and rewriting URLs to point to the CDN. You can configure when content expires and when it is updated. Content is uploaded only when it is new or changed, minimizing traffic but maximizing storage.

Sites with a small amount of traffic or sites with content that isn't often updated work well with push CDNs. Content is placed on the CDNs once, instead of being re-pulled at regular intervals.
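As a rough illustration of the URL rewriting that both CDN styles rely on, the helper below is a hypothetical sketch; `CDN_HOST` and `asset_url` are made-up names, not part of any framework:

```python
# Serve static assets from the CDN host instead of the origin server.
CDN_HOST = "https://cdn.example.com"

def asset_url(path: str, use_cdn: bool = True) -> str:
    """Rewrite a static asset path to point at the CDN."""
    if not use_cdn:
        return path                       # fall back to the origin server
    return CDN_HOST + "/" + path.lstrip("/")

print(asset_url("/static/css/site.css"))
# https://cdn.example.com/static/css/site.css
```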

### Pull CDNs

Pull CDNs grab new content from your server when the first user requests the content. You leave the content on your server and rewrite URLs to point to the CDN. This results in a slower request until the content is cached on the CDN.

[Time-to-live (TTL)](https://en.wikipedia.org/wiki/Time_to_live) determines how long content is cached. Pull CDNs minimize storage space on the CDN, but can create redundant traffic if files expire and are pulled before they have actually changed.

Sites with heavy traffic work well with pull CDNs, as traffic is spread out more evenly, with only recently-requested content remaining on the CDN.

### Disadvantage(s): CDN

- CDN costs could be significant depending on traffic, although this should be weighed against the additional costs you would incur without a CDN.
- Content might be stale if it is updated before the TTL expires it.
- CDNs require changing URLs for static content to point to the CDN.

### Source(s) and further reading

- [Globally distributed content delivery](http://repository.cmu.edu/cgi/viewcontent.cgi?article=2112&context=compsci)
- [The differences between push and pull CDNs](http://www.travelblogadvice.com/technical/the-differences-between-push-and-pull-cdns/)
- [Wikipedia](https://en.wikipedia.org/wiki/Content_delivery_network)

@@ -0,0 +1,38 @@
+++
noatcards = True
isdraft = False
+++

# Database caching, what to cache

### Introduction

There are multiple levels at which you can cache, falling into two general categories: database queries and objects:

- Row-level
- Query-level
- Fully-formed serializable objects
- Fully-rendered HTML

Generally, you should try to avoid file-based caching, as it makes cloning and auto-scaling more difficult.

### Caching at the database query level

Whenever you query the database, hash the query as a key and store the result to the cache (sketched below). This approach suffers from expiration issues:

- Hard to delete a cached result with complex queries
- If one piece of data changes, such as a table cell, you need to delete all cached queries that might include the changed cell
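A minimal sketch of query-level caching, assuming a `cache` object with Memcached-style `get`/`set` and a `db.query` helper; both are illustrative placeholders rather than a specific library:

```python
import hashlib
import json

def cached_query(db, cache, sql, ttl_seconds=60):
    """Hash the SQL text as the cache key and store the result set."""
    key = "query." + hashlib.sha1(sql.encode("utf-8")).hexdigest()
    result = cache.get(key)
    if result is not None:
        return json.loads(result)             # cache hit
    result = db.query(sql)                    # cache miss: run the query
    cache.set(key, json.dumps(result), ttl_seconds)
    return result

# Any change to the underlying rows requires deleting every cached
# query key that might include them, which is what makes expiration hard.
```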

### Caching at the object level

See your data as an object, similar to what you do with your application code. Have your application assemble the dataset from the database into a class instance or a data structure(s):

- Remove the object from cache if its underlying data has changed
- Allows for asynchronous processing: workers assemble objects by consuming the latest cached object

Suggestions of what to cache:

- User sessions
- Fully rendered web pages
- Activity streams
- User graph data

@@ -0,0 +1,23 @@
+++
noatcards = True
isdraft = False
+++

# Database

[![](https://camo.githubusercontent.com/15a7553727e6da98d0de5e9ca3792f6d2b5e92d4/687474703a2f2f692e696d6775722e636f6d2f586b6d3543587a2e706e67)](https://camo.githubusercontent.com/15a7553727e6da98d0de5e9ca3792f6d2b5e92d4/687474703a2f2f692e696d6775722e636f6d2f586b6d3543587a2e706e67)
_[Source: Scaling up to your first 10 million users](https://www.youtube.com/watch?v=vg5onp8TU6Q)_

### Relational database management system (RDBMS)

A relational database like SQL is a collection of data items organized in tables.

ACID is a set of properties of relational database [transactions](https://en.wikipedia.org/wiki/Database_transaction) (a minimal sketch follows the list):

- Atomicity - Each transaction is all or nothing
- Consistency - Any transaction will bring the database from one valid state to another
- Isolation - Executing transactions concurrently has the same results as if the transactions were executed serially
- Durability - Once a transaction has been committed, it will remain so

There are many techniques to scale a relational database: master-slave replication, master-master replication, federation, sharding, denormalization, and SQL tuning.