From f4af06bdffcdf518fdea48a76d32f8004ba1f869 Mon Sep 17 00:00:00 2001
From: Vu
Date: Sun, 14 Mar 2021 17:08:05 +0700
Subject: [PATCH 01/11] porting to noat.cards
---
.github/PULL_REQUEST_TEMPLATE.md | 6 +-
CONTRIBUTING.md | 14 +-
LICENSE.txt | 2 +-
README-ja.md | 774 ++++++++--------
README-zh-Hans.md | 764 ++++++++--------
README-zh-TW.md | 860 +++++++++---------
README.md | 832 ++++++++---------
TRANSLATIONS.md | 80 +-
generate-epub.sh | 2 +-
resources/noat.cards/Application layer.md | 42 +
resources/noat.cards/Asynchronism.md | 50 +
resources/noat.cards/Availability patterns.md | 69 ++
.../call_center/call_center.ipynb | 84 +-
.../call_center/call_center.py | 82 +-
.../deck_of_cards/deck_of_cards.ipynb | 60 +-
.../deck_of_cards/deck_of_cards.py | 58 +-
.../hash_table/hash_map.ipynb | 32 +-
.../hash_table/hash_map.py | 30 +-
.../lru_cache/lru_cache.ipynb | 42 +-
.../lru_cache/lru_cache.py | 40 +-
.../online_chat/online_chat.ipynb | 62 +-
.../online_chat/online_chat.py | 60 +-
.../parking_lot/parking_lot.ipynb | 80 +-
.../parking_lot/parking_lot.py | 80 +-
.../system_design/mint/README-zh-Hans.md | 186 ++--
solutions/system_design/mint/README.md | 180 ++--
.../system_design/mint/mint_mapreduce.py | 44 +-
solutions/system_design/mint/mint_snippets.py | 20 +-
.../system_design/pastebin/README-zh-Hans.md | 152 ++--
solutions/system_design/pastebin/README.md | 142 +--
solutions/system_design/pastebin/pastebin.py | 34 +-
.../query_cache/README-zh-Hans.md | 142 +--
solutions/system_design/query_cache/README.md | 140 +--
.../query_cache/query_cache_snippets.py | 52 +-
.../sales_rank/README-zh-Hans.md | 186 ++--
solutions/system_design/sales_rank/README.md | 184 ++--
.../sales_rank/sales_rank_mapreduce.py | 72 +-
.../scaling_aws/README-zh-Hans.md | 96 +-
solutions/system_design/scaling_aws/README.md | 98 +-
.../social_graph/README-zh-Hans.md | 172 ++--
.../system_design/social_graph/README.md | 176 ++--
.../social_graph/social_graph_snippets.py | 44 +-
.../system_design/twitter/README-zh-Hans.md | 114 +--
solutions/system_design/twitter/README.md | 116 +--
.../web_crawler/README-zh-Hans.md | 154 ++--
solutions/system_design/web_crawler/README.md | 154 ++--
.../web_crawler/web_crawler_mapreduce.py | 14 +-
.../web_crawler/web_crawler_snippets.py | 52 +-
48 files changed, 3545 insertions(+), 3384 deletions(-)
create mode 100644 resources/noat.cards/Application layer.md
create mode 100644 resources/noat.cards/Asynchronism.md
create mode 100644 resources/noat.cards/Availability patterns.md
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index ca9bd979..93f40e1d 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -1,11 +1,11 @@
## Review the Contributing Guidelines
-Before submitting a pull request, verify it meets all requirements in the [Contributing Guidelines](https://github.com/donnemartin/system-design-primer/blob/master/CONTRIBUTING.md).
+Before submitting a pull request, verify it meets all requirements in the [Contributing Guidelines](https://github.com/donnemartin/system-design-primer/blob/master/CONTRIBUTING.md) .
### Translations
-See the [Contributing Guidelines](https://github.com/donnemartin/system-design-primer/blob/master/CONTRIBUTING.md). Verify you've:
+See the [Contributing Guidelines](https://github.com/donnemartin/system-design-primer/blob/master/CONTRIBUTING.md) . Verify you've:
-* Tagged the [language maintainer](https://github.com/donnemartin/system-design-primer/blob/master/TRANSLATIONS.md)
+* Tagged the [language maintainer](https://github.com/donnemartin/system-design-primer/blob/master/TRANSLATIONS.md)
* Prefixed the title with a language code
* Example: "ja: Fix ..."
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 69348619..db116e60 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -7,14 +7,14 @@ Contributions are welcome!
## Bug Reports
-For bug reports or requests [submit an issue](https://github.com/donnemartin/system-design-primer/issues).
+For bug reports or requests [submit an issue](https://github.com/donnemartin/system-design-primer/issues) .
## Pull Requests
The preferred way to contribute is to fork the
[main repository](https://github.com/donnemartin/system-design-primer) on GitHub.
-1. Fork the [main repository](https://github.com/donnemartin/system-design-primer). Click on the 'Fork' button near the top of the page. This creates a copy of the code under your account on the GitHub server.
+1. Fork the [main repository](https://github.com/donnemartin/system-design-primer) . Click on the 'Fork' button near the top of the page. This creates a copy of the code under your account on the GitHub server.
2. Clone this copy to your local disk:
@@ -38,7 +38,7 @@ The preferred way to contribute is to fork the
### GitHub Pull Requests Docs
-If you are not familiar with pull requests, review the [pull request docs](https://help.github.com/articles/using-pull-requests/).
+If you are not familiar with pull requests, review the [pull request docs](https://help.github.com/articles/using-pull-requests/) .
## Translations
@@ -48,7 +48,7 @@ We'd like for the guide to be available in many languages. Here is the process f
* Translations follow the content of the original. Contributors must speak at least some English, so that translations do not diverge.
* Each translation has a maintainer to update the translation as the original evolves and to review others' changes. This doesn't require a lot of time, but a review by the maintainer is important to maintain quality.
-See [Translations](TRANSLATIONS.md).
+See [Translations](TRANSLATIONS.md) .
### Changes to translations
@@ -56,7 +56,7 @@ See [Translations](TRANSLATIONS.md).
* Changes that improve translations should be made directly on the file for that language. Pull requests should only modify one language at a time.
* Submit a pull request with changes to the file in that language. Each language has a maintainer, who reviews changes in that language. Then the primary maintainer [@donnemartin](https://github.com/donnemartin) merges it in.
* Prefix pull requests and issues with language codes if they are for that translation only, e.g. "es: Improve grammar", so maintainers can find them easily.
-* Tag the translation maintainer for a code review, see the list of [translation maintainers](TRANSLATIONS.md).
+* Tag the translation maintainer for a code review, see the list of [translation maintainers](TRANSLATIONS.md) .
* You will need to get a review from a native speaker (preferably the language maintainer) before your pull request is merged.
### Adding translations to new languages
@@ -64,9 +64,9 @@ See [Translations](TRANSLATIONS.md).
Translations to new languages are always welcome! Keep in mind a transation must be maintained.
* Do you have time to be a maintainer for a new language? Please see the list of [translations](TRANSLATIONS.md) and tell us so we know we can count on you in the future.
-* Check the [translations](TRANSLATIONS.md), issues, and pull requests to see if a translation is in progress or stalled. If it's in progress, offer to help. If it's stalled, consider becoming the maintainer if you can commit to it.
+* Check the [translations](TRANSLATIONS.md) , issues, and pull requests to see if a translation is in progress or stalled. If it's in progress, offer to help. If it's stalled, consider becoming the maintainer if you can commit to it.
* If a translation has not yet been started, file an issue for your language so people know you are working on it and we'll coordinate. Confirm you are native level in the language and are willing to maintain the translation, so it's not orphaned.
-* To get started, fork the repo, then submit a pull request to the main repo with the single file README-xx.md added, where xx is the language code. Use standard [IETF language tags](https://www.w3.org/International/articles/language-tags/), i.e. the same as is used by Wikipedia, *not* the code for a single country. These are usually just the two-letter lowercase code, for example, `fr` for French and `uk` for Ukrainian (not `ua`, which is for the country). For languages that have variations, use the shortest tag, such as `zh-Hant`.
+* To get started, fork the repo, then submit a pull request to the main repo with the single file README-xx.md added, where xx is the language code. Use standard [IETF language tags](https://www.w3.org/International/articles/language-tags/) , i.e. the same as is used by Wikipedia, *not* the code for a single country. These are usually just the two-letter lowercase code, for example, `fr` for French and `uk` for Ukrainian (not `ua`, which is for the country) . For languages that have variations, use the shortest tag, such as `zh-Hant`.
* Feel free to invite friends to help your original translation by having them fork your repo, then merging their pull requests to your forked repo. Translations are difficult and usually have errors that others need to find.
* Add links to your translation at the top of every README-XX.md file. For consistency, the link should be added in alphabetical order by ISO code, and the anchor text should be in the native language.
* When you've fully translated the English README.md, comment on the pull request in the main repo that it's ready to be merged.
diff --git a/LICENSE.txt b/LICENSE.txt
index 5a04d642..e2527f91 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -1,6 +1,6 @@
I am providing code and resources in this repository to you under an open source
license. Because this is my personal repository, the license you receive to my
-code and resources is from me and not my employer (Facebook).
+code and resources is from me and not my employer (Facebook) .
Copyright 2017 Donne Martin
diff --git a/README-ja.md b/README-ja.md
index ce116705..739a7c5f 100644
--- a/README-ja.md
+++ b/README-ja.md
@@ -1,4 +1,4 @@
-*[English](README.md) ∙ [日本語](README-ja.md) ∙ [简体中文](README-zh-Hans.md) ∙ [繁體中文](README-zh-TW.md) | [العَرَبِيَّة](https://github.com/donnemartin/system-design-primer/issues/170) ∙ [বাংলা](https://github.com/donnemartin/system-design-primer/issues/220) ∙ [Português do Brasil](https://github.com/donnemartin/system-design-primer/issues/40) ∙ [Deutsch](https://github.com/donnemartin/system-design-primer/issues/186) ∙ [ελληνικά](https://github.com/donnemartin/system-design-primer/issues/130) ∙ [עברית](https://github.com/donnemartin/system-design-primer/issues/272) ∙ [Italiano](https://github.com/donnemartin/system-design-primer/issues/104) ∙ [한국어](https://github.com/donnemartin/system-design-primer/issues/102) ∙ [فارسی](https://github.com/donnemartin/system-design-primer/issues/110) ∙ [Polski](https://github.com/donnemartin/system-design-primer/issues/68) ∙ [русский язык](https://github.com/donnemartin/system-design-primer/issues/87) ∙ [Español](https://github.com/donnemartin/system-design-primer/issues/136) ∙ [ภาษาไทย](https://github.com/donnemartin/system-design-primer/issues/187) ∙ [Türkçe](https://github.com/donnemartin/system-design-primer/issues/39) ∙ [tiếng Việt](https://github.com/donnemartin/system-design-primer/issues/127) ∙ [Français](https://github.com/donnemartin/system-design-primer/issues/250) | [Add Translation](https://github.com/donnemartin/system-design-primer/issues/28)*
+*[English](README.md) ∙ [日本語](README-ja.md) ∙ [简体中文](README-zh-Hans.md) ∙ [繁體中文](README-zh-TW.md) | [العَرَبِيَّة](https://github.com/donnemartin/system-design-primer/issues/170) ∙ [বাংলা](https://github.com/donnemartin/system-design-primer/issues/220) ∙ [Português do Brasil](https://github.com/donnemartin/system-design-primer/issues/40) ∙ [Deutsch](https://github.com/donnemartin/system-design-primer/issues/186) ∙ [ελληνικά](https://github.com/donnemartin/system-design-primer/issues/130) ∙ [עברית](https://github.com/donnemartin/system-design-primer/issues/272) ∙ [Italiano](https://github.com/donnemartin/system-design-primer/issues/104) ∙ [한국어](https://github.com/donnemartin/system-design-primer/issues/102) ∙ [فارسی](https://github.com/donnemartin/system-design-primer/issues/110) ∙ [Polski](https://github.com/donnemartin/system-design-primer/issues/68) ∙ [русский язык](https://github.com/donnemartin/system-design-primer/issues/87) ∙ [Español](https://github.com/donnemartin/system-design-primer/issues/136) ∙ [ภาษาไทย](https://github.com/donnemartin/system-design-primer/issues/187) ∙ [Türkçe](https://github.com/donnemartin/system-design-primer/issues/39) ∙ [tiếng Việt](https://github.com/donnemartin/system-design-primer/issues/127) ∙ [Français](https://github.com/donnemartin/system-design-primer/issues/250) | [Add Translation](https://github.com/donnemartin/system-design-primer/issues/28) *
# システム設計入門
@@ -35,11 +35,11 @@
面接準備に役立つその他のトピック:
-* [学習指針](#学習指針)
-* [システム設計面接課題にどのように準備するか](#システム設計面接にどのようにして臨めばいいか)
-* [システム設計課題例 **とその解答**](#システム設計課題例とその解答)
-* [オブジェクト指向設計課題例、 **とその解答**](#オブジェクト指向設計問題と解答)
-* [その他のシステム設計面接課題例](#他のシステム設計面接例題)
+* [学習指針](#学習指針)
+* [システム設計面接課題にどのように準備するか](#システム設計面接にどのようにして臨めばいいか)
+* [システム設計課題例 **とその解答**](#システム設計課題例とその解答)
+* [オブジェクト指向設計課題例、 **とその解答**](#オブジェクト指向設計問題と解答)
+* [その他のシステム設計面接課題例](#他のシステム設計面接例題)
## 暗記カード
@@ -50,24 +50,24 @@
この[Anki用フラッシュカードデッキ](https://apps.ankiweb.net/) は、間隔反復を活用して、システム設計のキーコンセプトの学習を支援します。
-* [システム設計デッキ](resources/flash_cards/System%20Design.apkg)
-* [システム設計練習課題デッキ](resources/flash_cards/System%20Design%20Exercises.apkg)
-* [オブジェクト指向練習課題デッキ](resources/flash_cards/OO%20Design.apkg)
+* [システム設計デッキ](resources/flash_cards/System%20Design.apkg)
+* [システム設計練習課題デッキ](resources/flash_cards/System%20Design%20Exercises.apkg)
+* [オブジェクト指向練習課題デッキ](resources/flash_cards/OO%20Design.apkg)
外出先や移動中の勉強に役立つでしょう。
### コーディング技術課題用の問題: 練習用インタラクティブアプリケーション
-コード技術面接用の問題を探している場合は[**こちら**](https://github.com/donnemartin/interactive-coding-challenges)
+コード技術面接用の問題を探している場合は[**こちら**](https://github.com/donnemartin/interactive-coding-challenges)
-Check out the sister repo [**Interactive Coding Challenges**](https://github.com/donnemartin/interactive-coding-challenges), which contains an additional Anki deck:
+Check out the sister repo [**Interactive Coding Challenges**](https://github.com/donnemartin/interactive-coding-challenges) , which contains an additional Anki deck:
-* [Coding deck](https://github.com/donnemartin/interactive-coding-challenges/tree/master/anki_cards/Coding.apkg)
+* [Coding deck](https://github.com/donnemartin/interactive-coding-challenges/tree/master/anki_cards/Coding.apkg)
## Contributing
@@ -80,11 +80,11 @@ Feel free to submit pull requests to help:
* Fix errors
* Improve sections
* Add new sections
-* [Translate](https://github.com/donnemartin/system-design-primer/issues/28)
+* [Translate](https://github.com/donnemartin/system-design-primer/issues/28)
-Content that needs some polishing is placed [under development](#under-development).
+Content that needs some polishing is placed [under development](#under-development) .
-Review the [Contributing Guidelines](CONTRIBUTING.md).
+Review the [Contributing Guidelines](CONTRIBUTING.md) .
## Index of system design topics
@@ -97,93 +97,93 @@ Review the [Contributing Guidelines](CONTRIBUTING.md).
-* [System design topics: start here](#system-design-topics-start-here)
- * [Step 1: Review the scalability video lecture](#step-1-review-the-scalability-video-lecture)
- * [Step 2: Review the scalability article](#step-2-review-the-scalability-article)
- * [Next steps](#next-steps)
-* [Performance vs scalability](#performance-vs-scalability)
-* [Latency vs throughput](#latency-vs-throughput)
-* [Availability vs consistency](#availability-vs-consistency)
- * [CAP theorem](#cap-theorem)
- * [CP - consistency and partition tolerance](#cp---consistency-and-partition-tolerance)
- * [AP - availability and partition tolerance](#ap---availability-and-partition-tolerance)
-* [Consistency patterns](#consistency-patterns)
- * [Weak consistency](#weak-consistency)
- * [Eventual consistency](#eventual-consistency)
- * [Strong consistency](#strong-consistency)
-* [Availability patterns](#availability-patterns)
- * [Fail-over](#fail-over)
- * [Replication](#replication)
- * [Availability in numbers](#availability-in-numbers)
-* [Domain name system](#domain-name-system)
-* [Content delivery network](#content-delivery-network)
- * [Push CDNs](#push-cdns)
- * [Pull CDNs](#pull-cdns)
-* [Load balancer](#load-balancer)
- * [Active-passive](#active-passive)
- * [Active-active](#active-active)
- * [Layer 4 load balancing](#layer-4-load-balancing)
- * [Layer 7 load balancing](#layer-7-load-balancing)
- * [Horizontal scaling](#horizontal-scaling)
-* [Reverse proxy (web server)](#reverse-proxy-web-server)
- * [Load balancer vs reverse proxy](#load-balancer-vs-reverse-proxy)
-* [Application layer](#application-layer)
- * [Microservices](#microservices)
- * [Service discovery](#service-discovery)
-* [Database](#database)
- * [Relational database management system (RDBMS)](#relational-database-management-system-rdbms)
- * [Master-slave replication](#master-slave-replication)
- * [Master-master replication](#master-master-replication)
- * [Federation](#federation)
- * [Sharding](#sharding)
- * [Denormalization](#denormalization)
- * [SQL tuning](#sql-tuning)
- * [NoSQL](#nosql)
- * [Key-value store](#key-value-store)
- * [Document store](#document-store)
- * [Wide column store](#wide-column-store)
- * [Graph Database](#graph-database)
- * [SQL or NoSQL](#sql-or-nosql)
-* [Cache](#cache)
- * [Client caching](#client-caching)
- * [CDN caching](#cdn-caching)
- * [Web server caching](#web-server-caching)
- * [Database caching](#database-caching)
- * [Application caching](#application-caching)
- * [Caching at the database query level](#caching-at-the-database-query-level)
- * [Caching at the object level](#caching-at-the-object-level)
- * [When to update the cache](#when-to-update-the-cache)
- * [Cache-aside](#cache-aside)
- * [Write-through](#write-through)
- * [Write-behind (write-back)](#write-behind-write-back)
- * [Refresh-ahead](#refresh-ahead)
-* [Asynchronism](#asynchronism)
- * [Message queues](#message-queues)
- * [Task queues](#task-queues)
- * [Back pressure](#back-pressure)
-* [Communication](#communication)
- * [Transmission control protocol (TCP)](#transmission-control-protocol-tcp)
- * [User datagram protocol (UDP)](#user-datagram-protocol-udp)
- * [Remote procedure call (RPC)](#remote-procedure-call-rpc)
- * [Representational state transfer (REST)](#representational-state-transfer-rest)
-* [Security](#security)
-* [Appendix](#appendix)
- * [Powers of two table](#powers-of-two-table)
- * [Latency numbers every programmer should know](#latency-numbers-every-programmer-should-know)
- * [Additional system design interview questions](#additional-system-design-interview-questions)
- * [Real world architectures](#real-world-architectures)
- * [Company architectures](#company-architectures)
- * [Company engineering blogs](#company-engineering-blogs)
-* [Under development](#under-development)
-* [Credits](#credits)
-* [Contact info](#contact-info)
-* [License](#license)
+* [System design topics: start here](#system-design-topics-start-here)
+ * [Step 1: Review the scalability video lecture](#step-1-review-the-scalability-video-lecture)
+ * [Step 2: Review the scalability article](#step-2-review-the-scalability-article)
+ * [Next steps](#next-steps)
+* [Performance vs scalability](#performance-vs-scalability)
+* [Latency vs throughput](#latency-vs-throughput)
+* [Availability vs consistency](#availability-vs-consistency)
+ * [CAP theorem](#cap-theorem)
+ * [CP - consistency and partition tolerance](#cp---consistency-and-partition-tolerance)
+ * [AP - availability and partition tolerance](#ap---availability-and-partition-tolerance)
+* [Consistency patterns](#consistency-patterns)
+ * [Weak consistency](#weak-consistency)
+ * [Eventual consistency](#eventual-consistency)
+ * [Strong consistency](#strong-consistency)
+* [Availability patterns](#availability-patterns)
+ * [Fail-over](#fail-over)
+ * [Replication](#replication)
+ * [Availability in numbers](#availability-in-numbers)
+* [Domain name system](#domain-name-system)
+* [Content delivery network](#content-delivery-network)
+ * [Push CDNs](#push-cdns)
+ * [Pull CDNs](#pull-cdns)
+* [Load balancer](#load-balancer)
+ * [Active-passive](#active-passive)
+ * [Active-active](#active-active)
+ * [Layer 4 load balancing](#layer-4-load-balancing)
+ * [Layer 7 load balancing](#layer-7-load-balancing)
+ * [Horizontal scaling](#horizontal-scaling)
+* [Reverse proxy (web server) ](#reverse-proxy-web-server)
+ * [Load balancer vs reverse proxy](#load-balancer-vs-reverse-proxy)
+* [Application layer](#application-layer)
+ * [Microservices](#microservices)
+ * [Service discovery](#service-discovery)
+* [Database](#database)
+ * [Relational database management system (RDBMS) ](#relational-database-management-system-rdbms)
+ * [Master-slave replication](#master-slave-replication)
+ * [Master-master replication](#master-master-replication)
+ * [Federation](#federation)
+ * [Sharding](#sharding)
+ * [Denormalization](#denormalization)
+ * [SQL tuning](#sql-tuning)
+ * [NoSQL](#nosql)
+ * [Key-value store](#key-value-store)
+ * [Document store](#document-store)
+ * [Wide column store](#wide-column-store)
+ * [Graph Database](#graph-database)
+ * [SQL or NoSQL](#sql-or-nosql)
+* [Cache](#cache)
+ * [Client caching](#client-caching)
+ * [CDN caching](#cdn-caching)
+ * [Web server caching](#web-server-caching)
+ * [Database caching](#database-caching)
+ * [Application caching](#application-caching)
+ * [Caching at the database query level](#caching-at-the-database-query-level)
+ * [Caching at the object level](#caching-at-the-object-level)
+ * [When to update the cache](#when-to-update-the-cache)
+ * [Cache-aside](#cache-aside)
+ * [Write-through](#write-through)
+ * [Write-behind (write-back) ](#write-behind-write-back)
+ * [Refresh-ahead](#refresh-ahead)
+* [Asynchronism](#asynchronism)
+ * [Message queues](#message-queues)
+ * [Task queues](#task-queues)
+ * [Back pressure](#back-pressure)
+* [Communication](#communication)
+ * [Transmission control protocol (TCP) ](#transmission-control-protocol-tcp)
+ * [User datagram protocol (UDP) ](#user-datagram-protocol-udp)
+ * [Remote procedure call (RPC) ](#remote-procedure-call-rpc)
+ * [Representational state transfer (REST) ](#representational-state-transfer-rest)
+* [Security](#security)
+* [Appendix](#appendix)
+ * [Powers of two table](#powers-of-two-table)
+ * [Latency numbers every programmer should know](#latency-numbers-every-programmer-should-know)
+ * [Additional system design interview questions](#additional-system-design-interview-questions)
+ * [Real world architectures](#real-world-architectures)
+ * [Company architectures](#company-architectures)
+ * [Company engineering blogs](#company-engineering-blogs)
+* [Under development](#under-development)
+* [Credits](#credits)
+* [Contact info](#contact-info)
+* [License](#license)
## Study guide
-> Suggested topics to review based on your interview timeline (short, medium, long).
+> Suggested topics to review based on your interview timeline (short, medium, long) .
-
+
**Q: For interviews, do I need to know everything here?**
@@ -245,10 +245,10 @@ Outline a high level design with all important components.
### Step 3: Design core components
-Dive into details for each core component. For example, if you were asked to [design a url shortening service](solutions/system_design/pastebin/README.md), discuss:
+Dive into details for each core component. For example, if you were asked to [design a url shortening service](solutions/system_design/pastebin/README.md) , discuss:
* Generating and storing a hash of the full url
- * [MD5](solutions/system_design/pastebin/README.md) and [Base62](solutions/system_design/pastebin/README.md)
+ * [MD5](solutions/system_design/pastebin/README.md) and [Base62](solutions/system_design/pastebin/README.md)
* Hash collisions
* SQL or NoSQL
* Database schema
@@ -265,24 +265,24 @@ Identify and address bottlenecks, given the constraints. For example, do you ne
* Caching
* Database sharding
-Discuss potential solutions and trade-offs. Everything is a trade-off. Address bottlenecks using [principles of scalable system design](#index-of-system-design-topics).
+Discuss potential solutions and trade-offs. Everything is a trade-off. Address bottlenecks using [principles of scalable system design](#index-of-system-design-topics) .
### Back-of-the-envelope calculations
You might be asked to do some estimates by hand. Refer to the [Appendix](#appendix) for the following resources:
-* [Use back of the envelope calculations](http://highscalability.com/blog/2011/1/26/google-pro-tip-use-back-of-the-envelope-calculations-to-choo.html)
-* [Powers of two table](#powers-of-two-table)
-* [Latency numbers every programmer should know](#latency-numbers-every-programmer-should-know)
+* [Use back of the envelope calculations](http://highscalability.com/blog/2011/1/26/google-pro-tip-use-back-of-the-envelope-calculations-to-choo.html)
+* [Powers of two table](#powers-of-two-table)
+* [Latency numbers every programmer should know](#latency-numbers-every-programmer-should-know)
### Source(s) and further reading
Check out the following links to get a better idea of what to expect:
-* [How to ace a systems design interview](https://www.palantir.com/2011/10/how-to-rock-a-systems-design-interview/)
-* [The system design interview](http://www.hiredintech.com/system-design)
-* [Intro to Architecture and Systems Design Interviews](https://www.youtube.com/watch?v=ZgdS0EUmn70)
-* [System design template](https://leetcode.com/discuss/career/229177/My-System-Design-Template)
+* [How to ace a systems design interview](https://www.palantir.com/2011/10/how-to-rock-a-systems-design-interview/)
+* [The system design interview](http://www.hiredintech.com/system-design)
+* [Intro to Architecture and Systems Design Interviews](https://www.youtube.com/watch?v=ZgdS0EUmn70)
+* [System design template](https://leetcode.com/discuss/career/229177/My-System-Design-Template)
## System design interview questions with solutions
@@ -302,53 +302,53 @@ Check out the following links to get a better idea of what to expect:
| Design a system that scales to millions of users on AWS | [Solution](solutions/system_design/scaling_aws/README.md) |
| Add a system design question | [Contribute](#contributing) |
-### Design Pastebin.com (or Bit.ly)
+### Design Pastebin.com (or Bit.ly)
-[View exercise and solution](solutions/system_design/pastebin/README.md)
+[View exercise and solution](solutions/system_design/pastebin/README.md)
-
+
-### Design the Twitter timeline and search (or Facebook feed and search)
+### Design the Twitter timeline and search (or Facebook feed and search)
-[View exercise and solution](solutions/system_design/twitter/README.md)
+[View exercise and solution](solutions/system_design/twitter/README.md)
-
+
### Design a web crawler
-[View exercise and solution](solutions/system_design/web_crawler/README.md)
+[View exercise and solution](solutions/system_design/web_crawler/README.md)
-
+
### Design Mint.com
-[View exercise and solution](solutions/system_design/mint/README.md)
+[View exercise and solution](solutions/system_design/mint/README.md)
-
+
### Design the data structures for a social network
-[View exercise and solution](solutions/system_design/social_graph/README.md)
+[View exercise and solution](solutions/system_design/social_graph/README.md)
-
+
### Design a key-value store for a search engine
-[View exercise and solution](solutions/system_design/query_cache/README.md)
+[View exercise and solution](solutions/system_design/query_cache/README.md)
-
+
### Design Amazon's sales ranking by category feature
-[View exercise and solution](solutions/system_design/sales_rank/README.md)
+[View exercise and solution](solutions/system_design/sales_rank/README.md)
-
+
### Design a system that scales to millions of users on AWS
-[View exercise and solution](solutions/system_design/scaling_aws/README.md)
+[View exercise and solution](solutions/system_design/scaling_aws/README.md)
-
+
## Object-oriented design interview questions with solutions
@@ -360,13 +360,13 @@ Check out the following links to get a better idea of what to expect:
| Question | |
|---|---|
-| Design a hash map | [Solution](solutions/object_oriented_design/hash_table/hash_map.ipynb) |
-| Design a least recently used cache | [Solution](solutions/object_oriented_design/lru_cache/lru_cache.ipynb) |
-| Design a call center | [Solution](solutions/object_oriented_design/call_center/call_center.ipynb) |
-| Design a deck of cards | [Solution](solutions/object_oriented_design/deck_of_cards/deck_of_cards.ipynb) |
-| Design a parking lot | [Solution](solutions/object_oriented_design/parking_lot/parking_lot.ipynb) |
-| Design a chat server | [Solution](solutions/object_oriented_design/online_chat/online_chat.ipynb) |
-| Design a circular array | [Contribute](#contributing) |
+| Design a hash map | [Solution](solutions/object_oriented_design/hash_table/hash_map.ipynb) |
+| Design a least recently used cache | [Solution](solutions/object_oriented_design/lru_cache/lru_cache.ipynb) |
+| Design a call center | [Solution](solutions/object_oriented_design/call_center/call_center.ipynb) |
+| Design a deck of cards | [Solution](solutions/object_oriented_design/deck_of_cards/deck_of_cards.ipynb) |
+| Design a parking lot | [Solution](solutions/object_oriented_design/parking_lot/parking_lot.ipynb) |
+| Design a chat server | [Solution](solutions/object_oriented_design/online_chat/online_chat.ipynb) |
+| Design a circular array | [Contribute](#contributing) |
| Add an object-oriented design question | [Contribute](#contributing) |
## System design topics: start here
@@ -377,7 +377,7 @@ First, you'll need a basic understanding of common principles, learning about wh
### Step 1: Review the scalability video lecture
-[Scalability Lecture at Harvard](https://www.youtube.com/watch?v=-W9F__D3oY4)
+[Scalability Lecture at Harvard](https://www.youtube.com/watch?v=-W9F__D3oY4)
* Topics covered:
* Vertical scaling
@@ -389,13 +389,13 @@ First, you'll need a basic understanding of common principles, learning about wh
### Step 2: Review the scalability article
-[Scalability](http://www.lecloud.net/tagged/scalability/chrono)
+[Scalability](http://www.lecloud.net/tagged/scalability/chrono)
* Topics covered:
- * [Clones](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
- * [Databases](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
- * [Caches](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
- * [Asynchronism](http://www.lecloud.net/post/9699762917/scalability-for-dummies-part-4-asynchronism)
+ * [Clones](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
+ * [Databases](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
+ * [Caches](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
+ * [Asynchronism](http://www.lecloud.net/post/9699762917/scalability-for-dummies-part-4-asynchronism)
### Next steps
@@ -420,8 +420,8 @@ Another way to look at performance vs scalability:
### Source(s) and further reading
-* [A word on scalability](http://www.allthingsdistributed.com/2006/03/a_word_on_scalability.html)
-* [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
+* [A word on scalability](http://www.allthingsdistributed.com/2006/03/a_word_on_scalability.html)
+* [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
## Latency vs throughput
@@ -433,7 +433,7 @@ Generally, you should aim for **maximal throughput** with **acceptable latency**
### Source(s) and further reading
-* [Understanding latency vs throughput](https://community.cadence.com/cadence_blogs_8/b/sd/archive/2010/09/13/understanding-latency-vs-throughput)
+* [Understanding latency vs throughput](https://community.cadence.com/cadence_blogs_8/b/sd/archive/2010/09/13/understanding-latency-vs-throughput)
## Availability vs consistency
@@ -465,10 +465,10 @@ AP is a good choice if the business needs allow for [eventual consistency](#even
### Source(s) and further reading
-* [CAP theorem revisited](http://robertgreiner.com/2014/08/cap-theorem-revisited/)
-* [A plain english introduction to CAP theorem](http://ksat.me/a-plain-english-introduction-to-cap-theorem)
-* [CAP FAQ](https://github.com/henryr/cap-faq)
-* [The CAP theorem](https://www.youtube.com/watch?v=k-Yaq8AHlFA)
+* [CAP theorem revisited](http://robertgreiner.com/2014/08/cap-theorem-revisited/)
+* [A plain english introduction to CAP theorem](http://ksat.me/a-plain-english-introduction-to-cap-theorem)
+* [CAP FAQ](https://github.com/henryr/cap-faq)
+* [The CAP theorem](https://www.youtube.com/watch?v=k-Yaq8AHlFA)
## Consistency patterns
@@ -482,7 +482,7 @@ This approach is seen in systems such as memcached. Weak consistency works well
### Eventual consistency
-After a write, reads will eventually see it (typically within milliseconds). Data is replicated asynchronously.
+After a write, reads will eventually see it (typically within milliseconds). Data is replicated asynchronously.
This approach is seen in systems such as DNS and email. Eventual consistency works well in highly available systems.
@@ -494,7 +494,7 @@ This approach is seen in file systems and RDBMSes. Strong consistency works wel
### Source(s) and further reading
-* [Transactions across data centers](http://snarfed.org/transactions_across_datacenters_io.html)
+* [Transactions across data centers](http://snarfed.org/transactions_across_datacenters_io.html)
## Availability patterns
@@ -518,7 +518,7 @@ If the servers are public-facing, the DNS would need to know about the public IP
Active-active failover can also be referred to as master-master failover.
-### Disadvantage(s): failover
+### Disadvantage(s): failover
* Fail-over adds more hardware and additional complexity.
* There is a potential for loss of data if the active system fails before any newly written data can be replicated to the passive.
@@ -529,8 +529,8 @@ Active-active failover can also be referred to as master-master failover.
This topic is further discussed in the [Database](#database) section:
-* [Master-slave replication](#master-slave-replication)
-* [Master-master replication](#master-master-replication)
+* [Master-slave replication](#master-slave-replication)
+* [Master-master replication](#master-master-replication)
### Availability in numbers
@@ -563,7 +563,7 @@ If a service consists of multiple components prone to failure, the service's ove
Overall availability decreases when two components with availability < 100% are in sequence:
```
-Availability (Total) = Availability (Foo) * Availability (Bar)
+Availability (Total) = Availability (Foo) * Availability (Bar)
```
If both `Foo` and `Bar` each had 99.9% availability, their total availability in sequence would be 99.8%.
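The arithmetic above can be sketched in a few lines (an illustrative helper, not part of the original guide; the parallel case is the complementary calculation on downtimes):

```python
def availability_in_sequence(*components):
    """Total availability when every component must be up (failures compound)."""
    total = 1.0
    for a in components:
        total *= a
    return total

def availability_in_parallel(*components):
    """Total availability when any one redundant component suffices."""
    all_down = 1.0
    for a in components:
        all_down *= 1.0 - a
    return 1.0 - all_down

# Two components at 99.9% in sequence yield roughly 99.8%:
print(round(availability_in_sequence(0.999, 0.999), 4))  # 0.998
```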
@@ -588,33 +588,33 @@ If both `Foo` and `Bar` each had 99.9% availability, their total availability in
A Domain Name System (DNS) translates a domain name such as www.example.com to an IP address.
-DNS is hierarchical, with a few authoritative servers at the top level. Your router or ISP provides information about which DNS server(s) to contact when doing a lookup. Lower level DNS servers cache mappings, which could become stale due to DNS propagation delays. DNS results can also be cached by your browser or OS for a certain period of time, determined by the [time to live (TTL)](https://en.wikipedia.org/wiki/Time_to_live).
+DNS is hierarchical, with a few authoritative servers at the top level. Your router or ISP provides information about which DNS server(s) to contact when doing a lookup. Lower level DNS servers cache mappings, which could become stale due to DNS propagation delays. DNS results can also be cached by your browser or OS for a certain period of time, determined by the [time to live (TTL)](https://en.wikipedia.org/wiki/Time_to_live).
-* **NS record (name server)** - Specifies the DNS servers for your domain/subdomain.
-* **MX record (mail exchange)** - Specifies the mail servers for accepting messages.
-* **A record (address)** - Points a name to an IP address.
-* **CNAME (canonical)** - Points a name to another name or `CNAME` (example.com to www.example.com) or to an `A` record.
+* **NS record (name server)** - Specifies the DNS servers for your domain/subdomain.
+* **MX record (mail exchange)** - Specifies the mail servers for accepting messages.
+* **A record (address)** - Points a name to an IP address.
+* **CNAME (canonical)** - Points a name to another name or `CNAME` (example.com to www.example.com) or to an `A` record.
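Resolver-side caching with a TTL, as described above, can be modeled as a toy cache (a sketch only; the hostname and address below are hypothetical examples):

```python
import time

class TTLCache:
    """Minimal TTL cache, in the spirit of DNS resolvers caching lookups."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # name -> (value, expires_at)

    def put(self, name, value):
        self._store[name] = (value, time.monotonic() + self.ttl)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[name]  # entry went stale: evict and miss
            return None
        return value
```

A real resolver would use the TTL carried by the authoritative record rather than a fixed value.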
Services such as [CloudFlare](https://www.cloudflare.com/dns/) and [Route 53](https://aws.amazon.com/route53/) provide managed DNS services. Some DNS services can route traffic through various methods:
-* [Weighted round robin](https://www.g33kinfo.com/info/round-robin-vs-weighted-round-robin-lb)
+* [Weighted round robin](https://www.g33kinfo.com/info/round-robin-vs-weighted-round-robin-lb)
* Prevent traffic from going to servers under maintenance
* Balance between varying cluster sizes
* A/B testing
-* [Latency-based](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-latency)
-* [Geolocation-based](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-geo)
+* [Latency-based](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-latency)
+* [Geolocation-based](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-geo)
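A weighted round robin policy, for instance, can be sketched in a few lines (illustrative only; server names and weights are made up):

```python
import itertools

def weighted_round_robin(servers):
    """servers: list of (name, weight) pairs; yields names in weight proportion."""
    pool = [name for name, weight in servers for _ in range(weight)]
    return itertools.cycle(pool)

# A hypothetical pool where server "a" has twice the capacity of "b":
rr = weighted_round_robin([("a", 2), ("b", 1)])
print([next(rr) for _ in range(6)])  # ['a', 'a', 'b', 'a', 'a', 'b']
```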
-### Disadvantage(s): DNS
+### Disadvantage(s): DNS
* Accessing a DNS server introduces a slight delay, although mitigated by caching described above.
-* DNS server management could be complex and is generally managed by [governments, ISPs, and large companies](http://superuser.com/questions/472695/who-controls-the-dns-servers/472729).
-* DNS services have recently come under [DDoS attack](http://dyn.com/blog/dyn-analysis-summary-of-friday-october-21-attack/), preventing users from accessing websites such as Twitter without knowing Twitter's IP address(es).
+* DNS server management could be complex and is generally managed by [governments, ISPs, and large companies](http://superuser.com/questions/472695/who-controls-the-dns-servers/472729).
+* DNS services have recently come under [DDoS attack](http://dyn.com/blog/dyn-analysis-summary-of-friday-october-21-attack/), preventing users from accessing websites such as Twitter without knowing Twitter's IP address(es).
### Source(s) and further reading
-* [DNS architecture](https://technet.microsoft.com/en-us/library/dd197427(v=ws.10).aspx)
-* [Wikipedia](https://en.wikipedia.org/wiki/Domain_Name_System)
-* [DNS articles](https://support.dnsimple.com/categories/dns/)
+* [DNS architecture](https://technet.microsoft.com/en-us/library/dd197427(v=ws.10).aspx)
+* [Wikipedia](https://en.wikipedia.org/wiki/Domain_Name_System)
+* [DNS articles](https://support.dnsimple.com/categories/dns/)
## Content delivery network
@@ -641,11 +641,11 @@ Sites with a small amount of traffic or sites with content that isn't often upda
Pull CDNs grab new content from your server when the first user requests the content. You leave the content on your server and rewrite URLs to point to the CDN. This results in a slower request until the content is cached on the CDN.
-A [time-to-live (TTL)](https://en.wikipedia.org/wiki/Time_to_live) determines how long content is cached. Pull CDNs minimize storage space on the CDN, but can create redundant traffic if files expire and are pulled before they have actually changed.
+A [time-to-live (TTL)](https://en.wikipedia.org/wiki/Time_to_live) determines how long content is cached. Pull CDNs minimize storage space on the CDN, but can create redundant traffic if files expire and are pulled before they have actually changed.
Sites with heavy traffic work well with pull CDNs, as traffic is spread out more evenly with only recently-requested content remaining on the CDN.
-### Disadvantage(s): CDN
+### Disadvantage(s): CDN
* CDN costs could be significant depending on traffic, although this should be weighed with additional costs you would incur not using a CDN.
* Content might be stale if it is updated before the TTL expires it.
@@ -653,9 +653,9 @@ Sites with heavy traffic work well with pull CDNs, as traffic is spread out more
### Source(s) and further reading
-* [Globally distributed content delivery](https://figshare.com/articles/Globally_distributed_content_delivery/6605972)
-* [The differences between push and pull CDNs](http://www.travelblogadvice.com/technical/the-differences-between-push-and-pull-cdns/)
-* [Wikipedia](https://en.wikipedia.org/wiki/Content_delivery_network)
+* [Globally distributed content delivery](https://figshare.com/articles/Globally_distributed_content_delivery/6605972)
+* [The differences between push and pull CDNs](http://www.travelblogadvice.com/technical/the-differences-between-push-and-pull-cdns/)
+* [Wikipedia](https://en.wikipedia.org/wiki/Content_delivery_network)
## Load balancer
@@ -686,13 +686,13 @@ Load balancers can route traffic based on various metrics, including:
* Random
* Least loaded
* Session/cookies
-* [Round robin or weighted round robin](https://www.g33kinfo.com/info/round-robin-vs-weighted-round-robin-lb)
-* [Layer 4](#layer-4-load-balancing)
-* [Layer 7](#layer-7-load-balancing)
+* [Round robin or weighted round robin](https://www.g33kinfo.com/info/round-robin-vs-weighted-round-robin-lb)
+* [Layer 4](#layer-4-load-balancing)
+* [Layer 7](#layer-7-load-balancing)
### Layer 4 load balancing
-Layer 4 load balancers look at info at the [transport layer](#communication) to decide how to distribute requests. Generally, this involves the source, destination IP addresses, and ports in the header, but not the contents of the packet. Layer 4 load balancers forward network packets to and from the upstream server, performing [Network Address Translation (NAT)](https://www.nginx.com/resources/glossary/layer-4-load-balancing/).
+Layer 4 load balancers look at info at the [transport layer](#communication) to decide how to distribute requests. Generally, this involves the source, destination IP addresses, and ports in the header, but not the contents of the packet. Layer 4 load balancers forward network packets to and from the upstream server, performing [Network Address Translation (NAT)](https://www.nginx.com/resources/glossary/layer-4-load-balancing/).
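A layer 4 decision of this kind, using only addresses and ports, can be sketched as follows (field names and backend addresses are hypothetical):

```python
def route_layer4(packet):
    """Hypothetical layer-4 rule: pick a backend from IP/port alone,
    without ever looking at the packet's contents."""
    backends = ["10.0.0.1", "10.0.0.2"]
    # Hash the source address and port so one client sticks to one backend.
    return backends[hash((packet["src_ip"], packet["src_port"])) % len(backends)]

pkt = {"src_ip": "198.51.100.4", "src_port": 52311}
assert route_layer4(pkt) == route_layer4(pkt)  # same client, same backend
```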
### Layer 7 load balancing
@@ -704,14 +704,14 @@ At the cost of flexibility, layer 4 load balancing requires less time and comput
Load balancers can also help with horizontal scaling, improving performance and availability. Scaling out using commodity machines is more cost efficient and results in higher availability than scaling up a single server on more expensive hardware, called **Vertical Scaling**. It is also easier to hire for talent working on commodity hardware than it is for specialized enterprise systems.
-#### Disadvantage(s): horizontal scaling
+#### Disadvantage(s): horizontal scaling
* Scaling horizontally introduces complexity and involves cloning servers
* Servers should be stateless: they should not contain any user-related data like sessions or profile pictures
- * Sessions can be stored in a centralized data store such as a [database](#database) (SQL, NoSQL) or a persistent [cache](#cache) (Redis, Memcached)
+ * Sessions can be stored in a centralized data store such as a [database](#database) (SQL, NoSQL) or a persistent [cache](#cache) (Redis, Memcached)
* Downstream servers such as caches and databases need to handle more simultaneous connections as upstream servers scale out
-### Disadvantage(s): load balancer
+### Disadvantage(s): load balancer
* The load balancer can become a performance bottleneck if it does not have enough resources or if it is not configured properly.
* Introducing a load balancer to help eliminate a single point of failure results in increased complexity.
@@ -719,15 +719,15 @@ Load balancers can also help with horizontal scaling, improving performance and
### Source(s) and further reading
-* [NGINX architecture](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
-* [HAProxy architecture guide](http://www.haproxy.org/download/1.2/doc/architecture.txt)
-* [Scalability](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
+* [NGINX architecture](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
+* [HAProxy architecture guide](http://www.haproxy.org/download/1.2/doc/architecture.txt)
+* [Scalability](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
* [Wikipedia](https://en.wikipedia.org/wiki/Load_balancing_(computing))
-* [Layer 4 load balancing](https://www.nginx.com/resources/glossary/layer-4-load-balancing/)
-* [Layer 7 load balancing](https://www.nginx.com/resources/glossary/layer-7-load-balancing/)
-* [ELB listener config](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html)
+* [Layer 4 load balancing](https://www.nginx.com/resources/glossary/layer-4-load-balancing/)
+* [Layer 7 load balancing](https://www.nginx.com/resources/glossary/layer-7-load-balancing/)
+* [ELB listener config](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html)
-## Reverse proxy (web server)
+## Reverse proxy (web server)
@@ -758,17 +758,17 @@ Additional benefits include:
* Reverse proxies can be useful even with just one web server or application server, opening up the benefits described in the previous section.
* Solutions such as NGINX and HAProxy can support both layer 7 reverse proxying and load balancing.
-### Disadvantage(s): reverse proxy
+### Disadvantage(s): reverse proxy
* Introducing a reverse proxy results in increased complexity.
* A single reverse proxy is a single point of failure, configuring multiple reverse proxies (ie a [failover](https://en.wikipedia.org/wiki/Failover)) further increases complexity.
### Source(s) and further reading
-* [Reverse proxy vs load balancer](https://www.nginx.com/resources/glossary/reverse-proxy-vs-load-balancer/)
-* [NGINX architecture](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
-* [HAProxy architecture guide](http://www.haproxy.org/download/1.2/doc/architecture.txt)
-* [Wikipedia](https://en.wikipedia.org/wiki/Reverse_proxy)
+* [Reverse proxy vs load balancer](https://www.nginx.com/resources/glossary/reverse-proxy-vs-load-balancer/)
+* [NGINX architecture](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
+* [HAProxy architecture guide](http://www.haproxy.org/download/1.2/doc/architecture.txt)
+* [Wikipedia](https://en.wikipedia.org/wiki/Reverse_proxy)
## Application layer
@@ -780,30 +780,30 @@ Additional benefits include:
Separating out the web layer from the application layer (also known as platform layer) allows you to scale and configure both layers independently. Adding a new API results in adding application servers without necessarily adding additional web servers. The **single responsibility principle** advocates for small and autonomous services that work together. Small teams with small services can plan more aggressively for rapid growth.
-Workers in the application layer also help enable [asynchronism](#asynchronism).
+Workers in the application layer also help enable [asynchronism](#asynchronism).
### Microservices
-Related to this discussion are [microservices](https://en.wikipedia.org/wiki/Microservices), which can be described as a suite of independently deployable, small, modular services. Each service runs a unique process and communicates through a well-defined, lightweight mechanism to serve a business goal. 1
+Related to this discussion are [microservices](https://en.wikipedia.org/wiki/Microservices), which can be described as a suite of independently deployable, small, modular services. Each service runs a unique process and communicates through a well-defined, lightweight mechanism to serve a business goal. 1
Pinterest, for example, could have the following microservices: user profile, follower, feed, search, photo upload, etc.
### Service Discovery
-Systems such as [Consul](https://www.consul.io/docs/index.html), [Etcd](https://coreos.com/etcd/docs/latest), and [Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) can help services find each other by keeping track of registered names, addresses, and ports. [Health checks](https://www.consul.io/intro/getting-started/checks.html) help verify service integrity and are often done using an [HTTP](#hypertext-transfer-protocol-http) endpoint. Both Consul and Etcd have a built in [key-value store](#key-value-store) that can be useful for storing config values and other shared data.
+Systems such as [Consul](https://www.consul.io/docs/index.html), [Etcd](https://coreos.com/etcd/docs/latest), and [Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) can help services find each other by keeping track of registered names, addresses, and ports. [Health checks](https://www.consul.io/intro/getting-started/checks.html) help verify service integrity and are often done using an [HTTP](#hypertext-transfer-protocol-http) endpoint. Both Consul and Etcd have a built-in [key-value store](#key-value-store) that can be useful for storing config values and other shared data.
-### Disadvantage(s): application layer
+### Disadvantage(s): application layer
-* Adding an application layer with loosely coupled services requires a different approach from an architectural, operations, and process viewpoint (vs a monolithic system).
+* Adding an application layer with loosely coupled services requires a different approach from an architectural, operations, and process viewpoint (vs a monolithic system).
* Microservices can add complexity in terms of deployments and operations.
### Source(s) and further reading
-* [Intro to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale)
-* [Crack the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
-* [Service oriented architecture](https://en.wikipedia.org/wiki/Service-oriented_architecture)
-* [Introduction to Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper)
-* [Here's what you need to know about building microservices](https://cloudncode.wordpress.com/2016/07/22/msa-getting-started/)
+* [Intro to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale)
+* [Crack the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
+* [Service oriented architecture](https://en.wikipedia.org/wiki/Service-oriented_architecture)
+* [Introduction to Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper)
+* [Here's what you need to know about building microservices](https://cloudncode.wordpress.com/2016/07/22/msa-getting-started/)
## Database
@@ -813,11 +813,11 @@ Systems such as [Consul](https://www.consul.io/docs/index.html), [Etcd](https://
Source: Scaling up to your first 10 million users
-### Relational database management system (RDBMS)
+### Relational database management system (RDBMS)
A relational database like SQL is a collection of data items organized in tables.
-**ACID** is a set of properties of relational database [transactions](https://en.wikipedia.org/wiki/Database_transaction).
+**ACID** is a set of properties of relational database [transactions](https://en.wikipedia.org/wiki/Database_transaction).
* **Atomicity** - Each transaction is all or nothing
* **Consistency** - Any transaction will bring the database from one valid state to another
@@ -836,10 +836,10 @@ The master serves reads and writes, replicating writes to one or more slaves, wh
Source: Scalability, availability, stability, patterns
-##### Disadvantage(s): master-slave replication
+##### Disadvantage(s): master-slave replication
* Additional logic is needed to promote a slave to a master.
-* See [Disadvantage(s): replication](#disadvantages-replication) for points related to **both** master-slave and master-master.
+* See [Disadvantage(s): replication](#disadvantages-replication) for points related to **both** master-slave and master-master.
#### Master-master replication
@@ -851,14 +851,14 @@ Both masters serve reads and writes and coordinate with each other on writes. I
Source: Scalability, availability, stability, patterns
-##### Disadvantage(s): master-master replication
+##### Disadvantage(s) : master-master replication
* You'll need a load balancer or you'll need to make changes to your application logic to determine where to write.
* Most master-master systems are either loosely consistent (violating ACID) or have increased write latency due to synchronization.
* Conflict resolution comes more into play as more write nodes are added and as latency increases.
-* See [Disadvantage(s): replication](#disadvantages-replication) for points related to **both** master-slave and master-master.
+* See [Disadvantage(s): replication](#disadvantages-replication) for points related to **both** master-slave and master-master.
-##### Disadvantage(s): replication
+##### Disadvantage(s): replication
* There is a potential for loss of data if the master fails before any newly written data can be replicated to other nodes.
* Writes are replayed to the read replicas. If there are a lot of writes, the read replicas can get bogged down with replaying writes and can't do as many reads.
@@ -868,8 +868,8 @@ Both masters serve reads and writes and coordinate with each other on writes. I
##### Source(s) and further reading: replication
-* [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
-* [Multi-master replication](https://en.wikipedia.org/wiki/Multi-master_replication)
+* [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
+* [Multi-master replication](https://en.wikipedia.org/wiki/Multi-master_replication)
#### Federation
@@ -881,16 +881,16 @@ Both masters serve reads and writes and coordinate with each other on writes. I
Federation (or functional partitioning) splits up databases by function. For example, instead of a single, monolithic database, you could have three databases: **forums**, **users**, and **products**, resulting in less read and write traffic to each database and therefore less replication lag. Smaller databases result in more data that can fit in memory, which in turn results in more cache hits due to improved cache locality. With no single central master serializing writes you can write in parallel, increasing throughput.
-##### Disadvantage(s): federation
+##### Disadvantage(s): federation
* Federation is not effective if your schema requires huge functions or tables.
* You'll need to update your application logic to determine which database to read and write.
-* Joining data from two databases is more complex with a [server link](http://stackoverflow.com/questions/5145637/querying-data-by-joining-two-tables-in-two-database-on-different-servers).
+* Joining data from two databases is more complex with a [server link](http://stackoverflow.com/questions/5145637/querying-data-by-joining-two-tables-in-two-database-on-different-servers).
* Federation adds more hardware and additional complexity.
##### Source(s) and further reading: federation
-* [Scaling up to your first 10 million users](https://www.youtube.com/watch?v=kKjm4ehYiMs)
+* [Scaling up to your first 10 million users](https://www.youtube.com/watch?v=kKjm4ehYiMs)
#### Sharding
@@ -902,11 +902,11 @@ Federation (or functional partitioning) splits up databases by function. For ex
Sharding distributes data across different databases such that each database can only manage a subset of the data. Taking a users database as an example, as the number of users increases, more shards are added to the cluster.
-Similar to the advantages of [federation](#federation), sharding results in less read and write traffic, less replication, and more cache hits. Index size is also reduced, which generally improves performance with faster queries. If one shard goes down, the other shards are still operational, although you'll want to add some form of replication to avoid data loss. Like federation, there is no single central master serializing writes, allowing you to write in parallel with increased throughput.
+Similar to the advantages of [federation](#federation), sharding results in less read and write traffic, less replication, and more cache hits. Index size is also reduced, which generally improves performance with faster queries. If one shard goes down, the other shards are still operational, although you'll want to add some form of replication to avoid data loss. Like federation, there is no single central master serializing writes, allowing you to write in parallel with increased throughput.
Common ways to shard a table of users is either through the user's last name initial or the user's geographic location.
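Sharding by last-name initial, for example, might look like the toy scheme below (illustrative only; real systems often hash the whole key instead, precisely to avoid the lopsided distribution mentioned under the disadvantages):

```python
def shard_for(last_name, num_shards=4):
    """Toy shard selector: map the last-name initial onto num_shards buckets."""
    return (ord(last_name[0].upper()) - ord("A")) % num_shards

print(shard_for("Smith"))  # 'S' -> letter index 18 -> shard 2
```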
-##### Disadvantage(s): sharding
+##### Disadvantage(s): sharding
* You'll need to update your application logic to work with shards, which could result in complex SQL queries.
* Data distribution can become lopsided in a shard. For example, a set of power users on a shard could result in increased load to that shard compared to others.
@@ -916,19 +916,19 @@ Common ways to shard a table of users is either through the user's last name ini
##### Source(s) and further reading: sharding
-* [The coming of the shard](http://highscalability.com/blog/2009/8/6/an-unorthodox-approach-to-database-design-the-coming-of-the.html)
+* [The coming of the shard](http://highscalability.com/blog/2009/8/6/an-unorthodox-approach-to-database-design-the-coming-of-the.html)
* [Shard database architecture](https://en.wikipedia.org/wiki/Shard_(database_architecture))
-* [Consistent hashing](http://www.paperplanes.de/2011/12/9/the-magic-of-consistent-hashing.html)
+* [Consistent hashing](http://www.paperplanes.de/2011/12/9/the-magic-of-consistent-hashing.html)
#### Denormalization
Denormalization attempts to improve read performance at the expense of some write performance. Redundant copies of the data are written in multiple tables to avoid expensive joins. Some RDBMS such as [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL) and Oracle support [materialized views](https://en.wikipedia.org/wiki/Materialized_view) which handle the work of storing redundant information and keeping redundant copies consistent.
-Once data becomes distributed with techniques such as [federation](#federation) and [sharding](#sharding), managing joins across data centers further increases complexity. Denormalization might circumvent the need for such complex joins.
+Once data becomes distributed with techniques such as [federation](#federation) and [sharding](#sharding), managing joins across data centers further increases complexity. Denormalization might circumvent the need for such complex joins.
In most systems, reads can heavily outnumber writes 100:1 or even 1000:1. A read resulting in a complex database join can be very expensive, spending a significant amount of time on disk operations.
-##### Disadvantage(s): denormalization
+##### Disadvantage(s): denormalization
* Data is duplicated.
* Constraints can help redundant copies of information stay in sync, which increases complexity of the database design.
@@ -936,7 +936,7 @@ In most systems, reads can heavily outnumber writes 100:1 or even 1000:1. A rea
###### Source(s) and further reading: denormalization
-* [Denormalization](https://en.wikipedia.org/wiki/Denormalization)
+* [Denormalization](https://en.wikipedia.org/wiki/Denormalization)
#### SQL tuning
@@ -944,7 +944,7 @@ SQL tuning is a broad topic and many [books](https://www.amazon.com/s/ref=nb_sb_
It's important to **benchmark** and **profile** to simulate and uncover bottlenecks.
-* **Benchmark** - Simulate high-load situations with tools such as [ab](http://httpd.apache.org/docs/2.2/programs/ab.html).
+* **Benchmark** - Simulate high-load situations with tools such as [ab](http://httpd.apache.org/docs/2.2/programs/ab.html).
* **Profile** - Enable tools such as the [slow query log](http://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html) to help track performance issues.
Benchmarking and profiling might point you to the following optimizations.
@@ -958,8 +958,8 @@ Benchmarking and profiling might point you to the following optimizations.
* Use `INT` for larger numbers up to 2^32 or 4 billion.
* Use `DECIMAL` for currency to avoid floating point representation errors.
* Avoid storing large `BLOBS`, store the location of where to get the object instead.
-* `VARCHAR(255)` is the largest number of characters that can be counted in an 8 bit number, often maximizing the use of a byte in some RDBMS.
-* Set the `NOT NULL` constraint where applicable to [improve search performance](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search).
+* `VARCHAR(255)` is the largest number of characters that can be counted in an 8-bit number, often maximizing the use of a byte in some RDBMS.
+* Set the `NOT NULL` constraint where applicable to [improve search performance](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search).
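A few of the schema tips above in one sketch, using SQLite purely for illustration (note SQLite's type affinity is looser than MySQL's, so `VARCHAR(255)` is advisory there; the table and columns are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id       INTEGER PRIMARY KEY,
        email    VARCHAR(255) NOT NULL,  -- NOT NULL where applicable
        country  CHAR(2) NOT NULL,       -- CHAR for fixed-length fields
        photo_url TEXT                   -- store the object's location, not the BLOB
    )
""")
try:
    conn.execute("INSERT INTO users (email, country) VALUES (NULL, 'US')")
except sqlite3.IntegrityError:
    print("rejected")  # the NOT NULL constraint rejects the row
```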
##### Use good indices
@@ -979,32 +979,32 @@ Benchmarking and profiling might point you to the following optimizations.
##### Tune the query cache
-* In some cases, the [query cache](https://dev.mysql.com/doc/refman/5.7/en/query-cache.html) could lead to [performance issues](https://www.percona.com/blog/2016/10/12/mysql-5-7-performance-tuning-immediately-after-installation/).
+* In some cases, the [query cache](https://dev.mysql.com/doc/refman/5.7/en/query-cache.html) could lead to [performance issues](https://www.percona.com/blog/2016/10/12/mysql-5-7-performance-tuning-immediately-after-installation/).
##### Source(s) and further reading: SQL tuning
-* [Tips for optimizing MySQL queries](http://aiddroid.com/10-tips-optimizing-mysql-queries-dont-suck/)
-* [Is there a good reason i see VARCHAR(255) used so often?](http://stackoverflow.com/questions/1217466/is-there-a-good-reason-i-see-varchar255-used-so-often-as-opposed-to-another-l)
-* [How do null values affect performance?](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search)
-* [Slow query log](http://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html)
+* [Tips for optimizing MySQL queries](http://aiddroid.com/10-tips-optimizing-mysql-queries-dont-suck/)
+* [Is there a good reason I see VARCHAR(255) used so often?](http://stackoverflow.com/questions/1217466/is-there-a-good-reason-i-see-varchar255-used-so-often-as-opposed-to-another-l)
+* [How do null values affect performance?](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search)
+* [Slow query log](http://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html)
### NoSQL
-NoSQL is a collection of data items represented in a **key-value store**, **document store**, **wide column store**, or a **graph database**. Data is denormalized, and joins are generally done in the application code. Most NoSQL stores lack true ACID transactions and favor [eventual consistency](#eventual-consistency).
+NoSQL is a collection of data items represented in a **key-value store**, **document store**, **wide column store**, or a **graph database**. Data is denormalized, and joins are generally done in the application code. Most NoSQL stores lack true ACID transactions and favor [eventual consistency](#eventual-consistency).
-**BASE** is often used to describe the properties of NoSQL databases. In comparison with the [CAP Theorem](#cap-theorem), BASE chooses availability over consistency.
+**BASE** is often used to describe the properties of NoSQL databases. In comparison with the [CAP Theorem](#cap-theorem), BASE chooses availability over consistency.
* **Basically available** - the system guarantees availability.
* **Soft state** - the state of the system may change over time, even without input.
* **Eventual consistency** - the system will become consistent over a period of time, given that the system doesn't receive input during that period.
-In addition to choosing between [SQL or NoSQL](#sql-or-nosql), it is helpful to understand which type of NoSQL database best fits your use case(s). We'll review **key-value stores**, **document stores**, **wide column stores**, and **graph databases** in the next section.
+In addition to choosing between [SQL or NoSQL](#sql-or-nosql), it is helpful to understand which type of NoSQL database best fits your use case(s). We'll review **key-value stores**, **document stores**, **wide column stores**, and **graph databases** in the next section.
#### Key-value store
> Abstraction: hash table
-A key-value store generally allows for O(1) reads and writes and is often backed by memory or SSD. Data stores can maintain keys in [lexicographic order](https://en.wikipedia.org/wiki/Lexicographical_order), allowing efficient retrieval of key ranges. Key-value stores can allow for storing of metadata with a value.
+A key-value store generally allows for O(1) reads and writes and is often backed by memory or SSD. Data stores can maintain keys in [lexicographic order](https://en.wikipedia.org/wiki/Lexicographical_order), allowing efficient retrieval of key ranges. Key-value stores can allow for storing of metadata with a value.
Key-value stores provide high performance and are often used for simple data models or for rapidly-changing data, such as an in-memory cache layer. Since they offer only a limited set of operations, complexity is shifted to the application layer if additional operations are needed.
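As a toy sketch of these properties (not any particular product, and trading a real store's O(1) writes for brevity), a key-value store that keeps keys in lexicographic order can serve efficient range scans:

```python
import bisect

class KVStore:
    """Toy in-memory key-value store keeping keys in lexicographic order."""

    def __init__(self):
        self._keys = []   # sorted list of keys
        self._data = {}   # key -> value

    def set(self, key, value):
        if key not in self._data:
            bisect.insort(self._keys, key)  # keep keys sorted on insert
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def range(self, start, end):
        """Return (key, value) pairs with start <= key < end."""
        lo = bisect.bisect_left(self._keys, start)
        hi = bisect.bisect_left(self._keys, end)
        return [(k, self._data[k]) for k in self._keys[lo:hi]]

store = KVStore()
store.set("user:1", "alice")
store.set("user:2", "bob")
store.set("video:9", "cats")
print(store.range("user:", "user;"))  # scan all keys with the "user:" prefix
```

The prefix scan works because `";"` is the character immediately after `":"`, so every `"user:..."` key sorts into the half-open range.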
@@ -1012,16 +1012,16 @@ A key-value store is the basis for more complex systems such as a document store
##### Source(s) and further reading: key-value store
-* [Key-value database](https://en.wikipedia.org/wiki/Key-value_database)
-* [Disadvantages of key-value stores](http://stackoverflow.com/questions/4056093/what-are-the-disadvantages-of-using-a-key-value-table-over-nullable-columns-or)
-* [Redis architecture](http://qnimate.com/overview-of-redis-architecture/)
-* [Memcached architecture](https://www.adayinthelifeof.nl/2011/02/06/memcache-internals/)
+* [Key-value database](https://en.wikipedia.org/wiki/Key-value_database)
+* [Disadvantages of key-value stores](http://stackoverflow.com/questions/4056093/what-are-the-disadvantages-of-using-a-key-value-table-over-nullable-columns-or)
+* [Redis architecture](http://qnimate.com/overview-of-redis-architecture/)
+* [Memcached architecture](https://www.adayinthelifeof.nl/2011/02/06/memcache-internals/)
#### Document store
> Abstraction: key-value store with documents stored as values
-A document store is centered around documents (XML, JSON, binary, etc), where a document stores all information for a given object. Document stores provide APIs or a query language to query based on the internal structure of the document itself. *Note, many key-value stores include features for working with a value's metadata, blurring the lines between these two storage types.*
+A document store is centered around documents (XML, JSON, binary, etc), where a document stores all information for a given object. Document stores provide APIs or a query language to query based on the internal structure of the document itself. *Note, many key-value stores include features for working with a value's metadata, blurring the lines between these two storage types.*
Based on the underlying implementation, documents are organized by collections, tags, metadata, or directories. Although documents can be organized or grouped together, documents may have fields that are completely different from each other.
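A toy sketch of these ideas (not MongoDB's API): documents with completely different fields can live in the same collection and be queried by their internal structure:

```python
class Collection:
    """Toy document collection: documents are dicts, queried by field values."""

    def __init__(self):
        self._docs = []

    def insert(self, doc):
        self._docs.append(doc)

    def find(self, **criteria):
        """Return documents whose fields match all given criteria."""
        return [d for d in self._docs
                if all(d.get(k) == v for k, v in criteria.items())]

users = Collection()
users.insert({"name": "alice", "city": "NYC", "tags": ["admin"]})
users.insert({"name": "bob", "city": "SF"})  # fields can differ per document
print(users.find(city="NYC"))
```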
@@ -1031,10 +1031,10 @@ Document stores provide high flexibility and are often used for working with occ
##### Source(s) and further reading: document store
-* [Document-oriented database](https://en.wikipedia.org/wiki/Document-oriented_database)
-* [MongoDB architecture](https://www.mongodb.com/mongodb-architecture)
-* [CouchDB architecture](https://blog.couchdb.org/2016/08/01/couchdb-2-0-architecture/)
-* [Elasticsearch architecture](https://www.elastic.co/blog/found-elasticsearch-from-the-bottom-up)
+* [Document-oriented database](https://en.wikipedia.org/wiki/Document-oriented_database)
+* [MongoDB architecture](https://www.mongodb.com/mongodb-architecture)
+* [CouchDB architecture](https://blog.couchdb.org/2016/08/01/couchdb-2-0-architecture/)
+* [Elasticsearch architecture](https://www.elastic.co/blog/found-elasticsearch-from-the-bottom-up)
#### Wide column store
@@ -1046,7 +1046,7 @@ Document stores provide high flexibility and are often used for working with occ
> Abstraction: nested map `ColumnFamily<RowKey, Columns<ColKey, Value, Timestamp>>`
-A wide column store's basic unit of data is a column (name/value pair). A column can be grouped in column families (analogous to a SQL table). Super column families further group column families. You can access each column independently with a row key, and columns with the same row key form a row. Each value contains a timestamp for versioning and for conflict resolution.
+A wide column store's basic unit of data is a column (name/value pair). A column can be grouped in column families (analogous to a SQL table). Super column families further group column families. You can access each column independently with a row key, and columns with the same row key form a row. Each value contains a timestamp for versioning and for conflict resolution.
Google introduced [Bigtable](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf) as the first wide column store, which influenced the open-source [HBase](https://www.edureka.co/blog/hbase-architecture/) often used in the Hadoop ecosystem, and [Cassandra](http://docs.datastax.com/en/cassandra/3.0/cassandra/architecture/archIntro.html) from Facebook. Stores such as BigTable, HBase, and Cassandra maintain keys in lexicographic order, allowing efficient retrieval of selective key ranges.
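The nested map abstraction above can be sketched as follows (a toy sketch; real stores like Cassandra also handle partitioning and replication). Timestamps resolve write conflicts by keeping the newest value:

```python
import time
from collections import defaultdict

class ColumnFamily:
    """Toy wide column store: row key -> column name -> (value, timestamp)."""

    def __init__(self):
        self._rows = defaultdict(dict)

    def put(self, row_key, col, value, ts=None):
        ts = ts if ts is not None else time.time()
        current = self._rows[row_key].get(col)
        # Last-write-wins: the timestamp decides conflict resolution.
        if current is None or ts >= current[1]:
            self._rows[row_key][col] = (value, ts)

    def get(self, row_key, col):
        cell = self._rows[row_key].get(col)
        return cell[0] if cell else None

cf = ColumnFamily()
cf.put("user1", "name", "alice", ts=1)
cf.put("user1", "name", "bob", ts=0)  # stale write loses on timestamp
print(cf.get("user1", "name"))
```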
@@ -1054,10 +1054,10 @@ Wide column stores offer high availability and high scalability. They are often
##### Source(s) and further reading: wide column store
-* [SQL & NoSQL, a brief history](http://blog.grio.com/2015/11/sql-nosql-a-brief-history.html)
-* [Bigtable architecture](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf)
-* [HBase architecture](https://www.edureka.co/blog/hbase-architecture/)
-* [Cassandra architecture](http://docs.datastax.com/en/cassandra/3.0/cassandra/architecture/archIntro.html)
+* [SQL & NoSQL, a brief history](http://blog.grio.com/2015/11/sql-nosql-a-brief-history.html)
+* [Bigtable architecture](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf)
+* [HBase architecture](https://www.edureka.co/blog/hbase-architecture/)
+* [Cassandra architecture](http://docs.datastax.com/en/cassandra/3.0/cassandra/architecture/archIntro.html)
#### Graph database
@@ -1071,21 +1071,21 @@ Wide column stores offer high availability and high scalability. They are often
In a graph database, each node is a record and each arc is a relationship between two nodes. Graph databases are optimized to represent complex relationships with many foreign keys or many-to-many relationships.
-Graphs databases offer high performance for data models with complex relationships, such as a social network. They are relatively new and are not yet widely-used; it might be more difficult to find development tools and resources. Many graphs can only be accessed with [REST APIs](#representational-state-transfer-rest).
+Graph databases offer high performance for data models with complex relationships, such as a social network. They are relatively new and are not yet widely used; it might be more difficult to find development tools and resources. Many graphs can only be accessed with [REST APIs](#representational-state-transfer-rest).
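A toy adjacency-set sketch of the node/arc model (not Neo4j's API), showing a many-to-many query that would need joins in SQL:

```python
from collections import defaultdict

class Graph:
    """Toy graph database: nodes are records, arcs are relationships."""

    def __init__(self):
        self.nodes = {}
        self.edges = defaultdict(set)

    def add_node(self, node_id, **properties):
        self.nodes[node_id] = properties

    def add_edge(self, a, b):
        # Undirected relationship, e.g. a social-network friendship.
        self.edges[a].add(b)
        self.edges[b].add(a)

    def friends_of_friends(self, node_id):
        direct = self.edges[node_id]
        return {f2 for f in direct for f2 in self.edges[f]} - direct - {node_id}

g = Graph()
for name in ("alice", "bob", "carol"):
    g.add_node(name)
g.add_edge("alice", "bob")
g.add_edge("bob", "carol")
print(g.friends_of_friends("alice"))
```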
##### Source(s) and further reading: graph
-* [Graph database](https://en.wikipedia.org/wiki/Graph_database)
-* [Neo4j](https://neo4j.com/)
-* [FlockDB](https://blog.twitter.com/2010/introducing-flockdb)
+* [Graph database](https://en.wikipedia.org/wiki/Graph_database)
+* [Neo4j](https://neo4j.com/)
+* [FlockDB](https://blog.twitter.com/2010/introducing-flockdb)
#### Source(s) and further reading: NoSQL
-* [Explanation of base terminology](http://stackoverflow.com/questions/3342497/explanation-of-base-terminology)
-* [NoSQL databases a survey and decision guidance](https://medium.com/baqend-blog/nosql-databases-a-survey-and-decision-guidance-ea7823a822d#.wskogqenq)
-* [Scalability](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
-* [Introduction to NoSQL](https://www.youtube.com/watch?v=qI_g07C_Q5I)
-* [NoSQL patterns](http://horicky.blogspot.com/2009/11/nosql-patterns.html)
+* [Explanation of base terminology](http://stackoverflow.com/questions/3342497/explanation-of-base-terminology)
+* [NoSQL databases a survey and decision guidance](https://medium.com/baqend-blog/nosql-databases-a-survey-and-decision-guidance-ea7823a822d#.wskogqenq)
+* [Scalability](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
+* [Introduction to NoSQL](https://www.youtube.com/watch?v=qI_g07C_Q5I)
+* [NoSQL patterns](http://horicky.blogspot.com/2009/11/nosql-patterns.html)
### SQL or NoSQL
@@ -1126,8 +1126,8 @@ Sample data well-suited for NoSQL:
##### Source(s) and further reading: SQL or NoSQL
-* [Scaling up to your first 10 million users](https://www.youtube.com/watch?v=kKjm4ehYiMs)
-* [SQL vs NoSQL differences](https://www.sitepoint.com/sql-vs-nosql-differences/)
+* [Scaling up to your first 10 million users](https://www.youtube.com/watch?v=kKjm4ehYiMs)
+* [SQL vs NoSQL differences](https://www.sitepoint.com/sql-vs-nosql-differences/)
## Cache
@@ -1143,7 +1143,7 @@ Databases often benefit from a uniform distribution of reads and writes across i
### Client caching
-Caches can be located on the client side (OS or browser), [server side](#reverse-proxy-web-server), or in a distinct cache layer.
+Caches can be located on the client side (OS or browser), [server side](#reverse-proxy-web-server), or in a distinct cache layer.
### CDN caching
@@ -1159,7 +1159,7 @@ Your database usually includes some level of caching in a default configuration,
### Application caching
-In-memory caches such as Memcached and Redis are key-value stores between your application and your data storage. Since the data is held in RAM, it is much faster than typical databases where data is stored on disk. RAM is more limited than disk, so [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) algorithms such as [least recently used (LRU)](https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_(LRU)) can help invalidate 'cold' entries and keep 'hot' data in RAM.
+In-memory caches such as Memcached and Redis are key-value stores between your application and your data storage. Since the data is held in RAM, it is much faster than typical databases where data is stored on disk. RAM is more limited than disk, so [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) algorithms such as [least recently used (LRU)](https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_(LRU)) can help invalidate 'cold' entries and keep 'hot' data in RAM.
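As a sketch of LRU eviction (a toy, not Redis or Memcached internals), an `OrderedDict` can track recency: reads move an entry to the "hot" end, and the "cold" end is evicted at capacity:

```python
from collections import OrderedDict

class LRUCache:
    """Evict the least recently used entry once capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)         # mark as most recently used
        return self._data[key]

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the 'cold' entry

cache = LRUCache(capacity=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")      # 'a' is now hot
cache.set("c", 3)   # evicts 'b', the least recently used
print(cache.get("b"))
```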
Redis has the following additional features:
@@ -1184,7 +1184,7 @@ Whenever you query the database, hash the query as a key and store the result to
### Caching at the object level
-See your data as an object, similar to what you do with your application code. Have your application assemble the dataset from the database into a class instance or a data structure(s):
+See your data as an object, similar to what you do with your application code. Have your application assemble the dataset from the database into a class instance or a data structure(s) :
* Remove the object from cache if its underlying data has changed
* Allows for asynchronous processing: workers assemble objects by consuming the latest cached object
@@ -1216,12 +1216,12 @@ The application is responsible for reading and writing from storage. The cache
* Return entry
```python
-def get_user(self, user_id):
- user = cache.get("user.{0}", user_id)
+def get_user(self, user_id):
+    user = cache.get("user.{0}", user_id)
if user is None:
- user = db.query("SELECT * FROM users WHERE user_id = {0}", user_id)
+        user = db.query("SELECT * FROM users WHERE user_id = {0}", user_id)
if user is not None:
- key = "user.{0}".format(user_id)
+            key = "user.{0}".format(user_id)
cache.set(key, json.dumps(user))
return user
```
@@ -1230,7 +1230,7 @@ def get_user(self, user_id):
Subsequent reads of data added to cache are fast. Cache-aside is also referred to as lazy loading. Only requested data is cached, which avoids filling up the cache with data that isn't requested.
-##### Disadvantage(s): cache-aside
+##### Disadvantage(s): cache-aside
* Each cache miss results in three trips, which can cause a noticeable delay.
* Data can become stale if it is updated in the database. This issue is mitigated by setting a time-to-live (TTL) which forces an update of the cache entry, or by using write-through.
@@ -1253,25 +1253,25 @@ The application uses the cache as the main data store, reading and writing data
Application code:
```python
-set_user(12345, {"foo":"bar"})
+set_user(12345, {"foo":"bar"})
```
Cache code:
```python
-def set_user(user_id, values):
- user = db.query("UPDATE Users WHERE id = {0}", user_id, values)
- cache.set(user_id, user)
+def set_user(user_id, values):
+    user = db.query("UPDATE Users WHERE id = {0}", user_id, values)
+    cache.set(user_id, user)
```
Write-through is a slow overall operation due to the write operation, but subsequent reads of just written data are fast. Users are generally more tolerant of latency when updating data than reading data. Data in the cache is not stale.
-##### Disadvantage(s): write through
+##### Disadvantage(s): write through
* When a new node is created due to failure or scaling, the new node will not cache entries until the entry is updated in the database. Cache-aside in conjunction with write through can mitigate this issue.
* Most data written might never be read, which can be minimized with a TTL.
-#### Write-behind (write-back)
+#### Write-behind (write-back)
@@ -1284,7 +1284,7 @@ In write-behind, the application does the following:
* Add/update entry in cache
* Asynchronously write entry to the data store, improving write performance
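The steps above can be sketched with a queue and a background writer (a toy in-memory sketch; `cache`, `db`, and `set_user` are hypothetical stand-ins, and a real broker would survive process crashes):

```python
import queue
import threading

cache = {}
db = {}
write_queue = queue.Queue()

def set_user(user_id, values):
    cache[user_id] = values             # 1. add/update entry in cache
    write_queue.put((user_id, values))  # 2. enqueue the write, return fast

def db_writer():
    while True:
        user_id, values = write_queue.get()
        db[user_id] = values            # async worker persists the entry
        write_queue.task_done()

threading.Thread(target=db_writer, daemon=True).start()
set_user(42, {"name": "alice"})  # caller is not blocked on the database
write_queue.join()               # wait here only to show the write landed
```

If the process dies before the queue drains, the enqueued writes are lost, which is exactly the data-loss disadvantage noted below.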
-##### Disadvantage(s): write-behind
+##### Disadvantage(s): write-behind
* There could be data loss if the cache goes down prior to its contents hitting the data store.
* It is more complex to implement write-behind than it is to implement cache-aside or write-through.
@@ -1301,24 +1301,24 @@ You can configure the cache to automatically refresh any recently accessed cache
Refresh-ahead can result in reduced latency vs read-through if the cache can accurately predict which items are likely to be needed in the future.
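A minimal sketch of the idea (synchronous for brevity; a real implementation would refresh asynchronously, and `DB`/`load` are hypothetical stand-ins). Entries accessed late in their TTL window are reloaded before they expire:

```python
import time

DB = {"weather": "sunny"}  # stand-in for a slow backing store
TTL = 60                   # seconds
REFRESH_AT = 0.8           # refresh once 80% of the TTL has elapsed

cache = {}  # key -> (value, fetched_at)

def load(key):
    return DB[key]  # stand-in for a slow database read

def get(key):
    now = time.time()
    entry = cache.get(key)
    if entry is None:
        cache[key] = (load(key), now)  # cold miss: read through
    elif now - entry[1] > TTL * REFRESH_AT:
        cache[key] = (load(key), now)  # hot entry: refresh ahead of expiry
    return cache[key][0]

print(get("weather"))
```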
-##### Disadvantage(s): refresh-ahead
+##### Disadvantage(s): refresh-ahead
* Not accurately predicting which items are likely to be needed in the future can result in worse performance than without refresh-ahead.
-### Disadvantage(s): cache
+### Disadvantage(s): cache
-* Need to maintain consistency between caches and the source of truth such as the database through [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms).
+* Need to maintain consistency between caches and the source of truth such as the database through [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms).
* Cache invalidation is a difficult problem; there is additional complexity in deciding when to update the cache.
* Need to make application changes such as adding Redis or memcached.
### Source(s) and further reading
-* [From cache to in-memory data grid](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast)
-* [Scalable system design patterns](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html)
-* [Introduction to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale/)
-* [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
-* [Scalability](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
-* [AWS ElastiCache strategies](http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Strategies.html)
+* [From cache to in-memory data grid](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast)
+* [Scalable system design patterns](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html)
+* [Introduction to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale/)
+* [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
+* [Scalability](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
+* [AWS ElastiCache strategies](http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Strategies.html)
* [Wikipedia](https://en.wikipedia.org/wiki/Cache_(computing))
## Asynchronism
@@ -1340,32 +1340,32 @@ Message queues receive, hold, and deliver messages. If an operation is too slow
The user is not blocked and the job is processed in the background. During this time, the client might optionally do a small amount of processing to make it seem like the task has completed. For example, if posting a tweet, the tweet could be instantly posted to your timeline, but it could take some time before your tweet is actually delivered to all of your followers.
-**[Redis](https://redis.io/)** is useful as a simple message broker but messages can be lost.
+**[Redis](https://redis.io/)** is useful as a simple message broker but messages can be lost.
-**[RabbitMQ](https://www.rabbitmq.com/)** is popular but requires you to adapt to the 'AMQP' protocol and manage your own nodes.
+**[RabbitMQ](https://www.rabbitmq.com/)** is popular but requires you to adapt to the 'AMQP' protocol and manage your own nodes.
-**[Amazon SQS](https://aws.amazon.com/sqs/)** is hosted but can have high latency and has the possibility of messages being delivered twice.
+**[Amazon SQS](https://aws.amazon.com/sqs/)** is hosted but can have high latency and has the possibility of messages being delivered twice.
### Task queues
Task queues receive tasks and their related data, run them, then deliver their results. They can support scheduling and can be used to run computationally intensive jobs in the background.
-**[Celery](https://docs.celeryproject.org/en/stable/)** has support for scheduling and primarily has python support.
+**[Celery](https://docs.celeryproject.org/en/stable/)** has support for scheduling and primarily has Python support.
### Back pressure
-If queues start to grow significantly, the queue size can become larger than memory, resulting in cache misses, disk reads, and even slower performance. [Back pressure](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html) can help by limiting the queue size, thereby maintaining a high throughput rate and good response times for jobs already in the queue. Once the queue fills up, clients get a server busy or HTTP 503 status code to try again later. Clients can retry the request at a later time, perhaps with [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff).
+If queues start to grow significantly, the queue size can become larger than memory, resulting in cache misses, disk reads, and even slower performance. [Back pressure](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html) can help by limiting the queue size, thereby maintaining a high throughput rate and good response times for jobs already in the queue. Once the queue fills up, clients get a server busy or HTTP 503 status code to try again later. Clients can retry the request at a later time, perhaps with [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff).
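A client-side retry loop with exponential backoff might look like this sketch (`flaky_request` is a hypothetical stand-in for a call that returns HTTP 503 while the queue is full; the jitter spreads out retries from many clients):

```python
import random
import time

def call_with_backoff(request, max_attempts=5, base_delay=0.1):
    """Retry a request, doubling the wait (plus jitter) after each 503."""
    for attempt in range(max_attempts):
        status, body = request()
        if status != 503:  # not 'server busy': done
            return status, body
        delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
        time.sleep(delay)
    return status, body   # give up after max_attempts

attempts = []
def flaky_request():
    attempts.append(1)
    return (503, None) if len(attempts) < 3 else (200, "ok")

print(call_with_backoff(flaky_request, base_delay=0.01))
```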
-### Disadvantage(s): asynchronism
+### Disadvantage(s): asynchronism
* Use cases such as inexpensive calculations and realtime workflows might be better suited for synchronous operations, as introducing queues can add delays and complexity.
### Source(s) and further reading
-* [It's all a numbers game](https://www.youtube.com/watch?v=1KRYH75wgy4)
-* [Applying back pressure when overloaded](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html)
-* [Little's law](https://en.wikipedia.org/wiki/Little%27s_law)
-* [What is the difference between a message queue and a task queue?](https://www.quora.com/What-is-the-difference-between-a-message-queue-and-a-task-queue-Why-would-a-task-queue-require-a-message-broker-like-RabbitMQ-Redis-Celery-or-IronMQ-to-function)
+* [It's all a numbers game](https://www.youtube.com/watch?v=1KRYH75wgy4)
+* [Applying back pressure when overloaded](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html)
+* [Little's law](https://en.wikipedia.org/wiki/Little%27s_law)
+* [What is the difference between a message queue and a task queue?](https://www.quora.com/What-is-the-difference-between-a-message-queue-and-a-task-queue-Why-would-a-task-queue-require-a-message-broker-like-RabbitMQ-Redis-Celery-or-IronMQ-to-function)
## Communication
@@ -1375,11 +1375,11 @@ If queues start to grow significantly, the queue size can become larger than mem
Source: OSI 7 layer model
-### Hypertext transfer protocol (HTTP)
+### Hypertext transfer protocol (HTTP)
HTTP is a method for encoding and transporting data between a client and a server. It is a request/response protocol: clients issue requests and servers issue responses with relevant content and completion status info about the request. HTTP is self-contained, allowing requests and responses to flow through many intermediate routers and servers that perform load balancing, caching, encryption, and compression.
-A basic HTTP request consists of a verb (method) and a resource (endpoint). Below are common HTTP verbs:
+A basic HTTP request consists of a verb (method) and a resource (endpoint). Below are common HTTP verbs:
| Verb | Description | Idempotent* | Safe | Cacheable |
|---|---|---|---|---|
@@ -1395,11 +1395,11 @@ HTTP is an application layer protocol relying on lower-level protocols such as *
#### Source(s) and further reading: HTTP
-* [What is HTTP?](https://www.nginx.com/resources/glossary/http/)
-* [Difference between HTTP and TCP](https://www.quora.com/What-is-the-difference-between-HTTP-protocol-and-TCP-protocol)
-* [Difference between PUT and PATCH](https://laracasts.com/discuss/channels/general-discussion/whats-the-differences-between-put-and-patch?page=1)
+* [What is HTTP?](https://www.nginx.com/resources/glossary/http/)
+* [Difference between HTTP and TCP](https://www.quora.com/What-is-the-difference-between-HTTP-protocol-and-TCP-protocol)
+* [Difference between PUT and PATCH](https://laracasts.com/discuss/channels/general-discussion/whats-the-differences-between-put-and-patch?page=1)
-### Transmission control protocol (TCP)
+### Transmission control protocol (TCP)
-TCP is a connection-oriented protocol over an [IP network](https://en.wikipedia.org/wiki/Internet_Protocol). Connection is established and terminated using a [handshake](https://en.wikipedia.org/wiki/Handshaking). All packets sent are guaranteed to reach the destination in the original order and without corruption through:
+TCP is a connection-oriented protocol over an [IP network](https://en.wikipedia.org/wiki/Internet_Protocol). Connection is established and terminated using a [handshake](https://en.wikipedia.org/wiki/Handshaking). All packets sent are guaranteed to reach the destination in the original order and without corruption through:
* Sequence numbers and [checksum fields](https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Checksum_computation) for each packet
* [Acknowledgement](https://en.wikipedia.org/wiki/Acknowledgement_(data_networks)) packets and automatic retransmission
-If the sender does not receive a correct response, it will resend the packets. If there are multiple timeouts, the connection is dropped. TCP also implements [flow control](https://en.wikipedia.org/wiki/Flow_control_(data)) and [congestion control](https://en.wikipedia.org/wiki/Network_congestion#Congestion_control). These guarantees cause delays and generally result in less efficient transmission than UDP.
+If the sender does not receive a correct response, it will resend the packets. If there are multiple timeouts, the connection is dropped. TCP also implements [flow control](https://en.wikipedia.org/wiki/Flow_control_(data)) and [congestion control](https://en.wikipedia.org/wiki/Network_congestion#Congestion_control). These guarantees cause delays and generally result in less efficient transmission than UDP.
To ensure high throughput, web servers can keep a large number of TCP connections open, resulting in high memory usage. It can be expensive to have a large number of open connections between web server threads and say, a [memcached](https://memcached.org/) server. [Connection pooling](https://en.wikipedia.org/wiki/Connection_pool) can help in addition to switching to UDP where applicable.
@@ -1423,7 +1423,7 @@ Use TCP over UDP when:
* You need all of the data to arrive intact
* You want to automatically make a best estimate use of the network throughput
-### User datagram protocol (UDP)
+### User datagram protocol (UDP)
@@ -1445,14 +1445,14 @@ Use UDP over TCP when:
#### Source(s) and further reading: TCP and UDP
-* [Networking for game programming](http://gafferongames.com/networking-for-game-programmers/udp-vs-tcp/)
-* [Key differences between TCP and UDP protocols](http://www.cyberciti.biz/faq/key-differences-between-tcp-and-udp-protocols/)
-* [Difference between TCP and UDP](http://stackoverflow.com/questions/5970383/difference-between-tcp-and-udp)
-* [Transmission control protocol](https://en.wikipedia.org/wiki/Transmission_Control_Protocol)
-* [User datagram protocol](https://en.wikipedia.org/wiki/User_Datagram_Protocol)
-* [Scaling memcache at Facebook](http://www.cs.bu.edu/~jappavoo/jappavoo.github.com/451/papers/memcache-fb.pdf)
+* [Networking for game programming](http://gafferongames.com/networking-for-game-programmers/udp-vs-tcp/)
+* [Key differences between TCP and UDP protocols](http://www.cyberciti.biz/faq/key-differences-between-tcp-and-udp-protocols/)
+* [Difference between TCP and UDP](http://stackoverflow.com/questions/5970383/difference-between-tcp-and-udp)
+* [Transmission control protocol](https://en.wikipedia.org/wiki/Transmission_Control_Protocol)
+* [User datagram protocol](https://en.wikipedia.org/wiki/User_Datagram_Protocol)
+* [Scaling memcache at Facebook](http://www.cs.bu.edu/~jappavoo/jappavoo.github.com/451/papers/memcache-fb.pdf)
-### Remote procedure call (RPC)
+### Remote procedure call (RPC)
-In an RPC, a client causes a procedure to execute on a different address space, usually a remote server. The procedure is coded as if it were a local procedure call, abstracting away the details of how to communicate with the server from the client program. Remote calls are usually slower and less reliable than local calls so it is helpful to distinguish RPC calls from local calls. Popular RPC frameworks include [Protobuf](https://developers.google.com/protocol-buffers/), [Thrift](https://thrift.apache.org/), and [Avro](https://avro.apache.org/docs/current/).
+In an RPC, a client causes a procedure to execute on a different address space, usually a remote server. The procedure is coded as if it were a local procedure call, abstracting away the details of how to communicate with the server from the client program. Remote calls are usually slower and less reliable than local calls so it is helpful to distinguish RPC calls from local calls. Popular RPC frameworks include [Protobuf](https://developers.google.com/protocol-buffers/), [Thrift](https://thrift.apache.org/), and [Avro](https://avro.apache.org/docs/current/).
RPC is a request-response protocol:
@@ -1494,23 +1494,23 @@ Choose a native library (aka SDK) when:
HTTP APIs following **REST** tend to be used more often for public APIs.
-#### Disadvantage(s): RPC
+#### Disadvantage(s): RPC
* RPC clients become tightly coupled to the service implementation.
* A new API must be defined for every new operation or use case.
* It can be difficult to debug RPC.
-* You might not be able to leverage existing technologies out of the box. For example, it might require additional effort to ensure [RPC calls are properly cached](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/) on caching servers such as [Squid](http://www.squid-cache.org/).
+* You might not be able to leverage existing technologies out of the box. For example, it might require additional effort to ensure [RPC calls are properly cached](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/) on caching servers such as [Squid](http://www.squid-cache.org/).
-### Representational state transfer (REST)
+### Representational state transfer (REST)
REST is an architectural style enforcing a client/server model where the client acts on a set of resources managed by the server. The server provides a representation of resources and actions that can either manipulate or get a new representation of resources. All communication must be stateless and cacheable.
There are four qualities of a RESTful interface:
-* **Identify resources (URI in HTTP)** - use the same URI regardless of any operation.
-* **Change with representations (Verbs in HTTP)** - use verbs, headers, and body.
-* **Self-descriptive error message (status response in HTTP)** - Use status codes, don't reinvent the wheel.
-* **[HATEOAS](http://restcookbook.com/Basics/hateoas/) (HTML interface for HTTP)** - your web service should be fully accessible in a browser.
+* **Identify resources (URI in HTTP)** - use the same URI regardless of any operation.
+* **Change with representations (Verbs in HTTP)** - use verbs, headers, and body.
+* **Self-descriptive error message (status response in HTTP)** - Use status codes, don't reinvent the wheel.
+* **[HATEOAS](http://restcookbook.com/Basics/hateoas/) (HTML interface for HTTP)** - your web service should be fully accessible in a browser.
Sample REST calls:
@@ -1521,9 +1521,9 @@ PUT /someresources/anId
{"anotherdata": "another value"}
```
-REST is focused on exposing data. It minimizes the coupling between client/server and is often used for public HTTP APIs. REST uses a more generic and uniform method of exposing resources through URIs, [representation through headers](https://github.com/for-GET/know-your-http-well/blob/master/headers.md), and actions through verbs such as GET, POST, PUT, DELETE, and PATCH. Being stateless, REST is great for horizontal scaling and partitioning.
+REST is focused on exposing data. It minimizes the coupling between client/server and is often used for public HTTP APIs. REST uses a more generic and uniform method of exposing resources through URIs, [representation through headers](https://github.com/for-GET/know-your-http-well/blob/master/headers.md), and actions through verbs such as GET, POST, PUT, DELETE, and PATCH. Being stateless, REST is great for horizontal scaling and partitioning.
-#### Disadvantage(s): REST
+#### Disadvantage(s): REST
* With REST being focused on exposing data, it might not be a good fit if resources are not naturally organized or accessed in a simple hierarchy. For example, returning all updated records from the past hour matching a particular set of events is not easily expressed as a path. With REST, it is likely to be implemented with a combination of URI path, query parameters, and possibly the request body.
* REST typically relies on a few verbs (GET, POST, PUT, DELETE, and PATCH) which sometimes doesn't fit your use case. For example, moving expired documents to the archive folder might not cleanly fit within these verbs.
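The verb mismatch above is often worked around by modeling the action as a state change on the resource, in the style of the sample calls shown earlier (the `state` field below is a hypothetical sketch, not part of any specific API):

```
PUT /someresources/anId
{"state": "archived"}
```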
@@ -1548,31 +1548,31 @@ REST is focused on exposing data. It minimizes the coupling between client/serv
#### Source(s) and further reading: REST and RPC
-* [Do you really know why you prefer REST over RPC](https://apihandyman.io/do-you-really-know-why-you-prefer-rest-over-rpc/)
-* [When are RPC-ish approaches more appropriate than REST?](http://programmers.stackexchange.com/a/181186)
-* [REST vs JSON-RPC](http://stackoverflow.com/questions/15056878/rest-vs-json-rpc)
-* [Debunking the myths of RPC and REST](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/)
-* [What are the drawbacks of using REST](https://www.quora.com/What-are-the-drawbacks-of-using-RESTful-APIs)
-* [Crack the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
-* [Thrift](https://code.facebook.com/posts/1468950976659943/)
-* [Why REST for internal use and not RPC](http://arstechnica.com/civis/viewtopic.php?t=1190508)
+* [Do you really know why you prefer REST over RPC](https://apihandyman.io/do-you-really-know-why-you-prefer-rest-over-rpc/)
+* [When are RPC-ish approaches more appropriate than REST?](http://programmers.stackexchange.com/a/181186)
+* [REST vs JSON-RPC](http://stackoverflow.com/questions/15056878/rest-vs-json-rpc)
+* [Debunking the myths of RPC and REST](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/)
+* [What are the drawbacks of using REST](https://www.quora.com/What-are-the-drawbacks-of-using-RESTful-APIs)
+* [Crack the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
+* [Thrift](https://code.facebook.com/posts/1468950976659943/)
+* [Why REST for internal use and not RPC](http://arstechnica.com/civis/viewtopic.php?t=1190508)
## Security
-This section could use some updates. Consider [contributing](#contributing)!
+This section could use some updates. Consider [contributing](#contributing)!
Security is a broad topic. Unless you have considerable experience, a security background, or are applying for a position that requires knowledge of security, you probably won't need to know more than the basics:
* Encrypt in transit and at rest.
-* Sanitize all user inputs or any input parameters exposed to user to prevent [XSS](https://en.wikipedia.org/wiki/Cross-site_scripting) and [SQL injection](https://en.wikipedia.org/wiki/SQL_injection).
+* Sanitize all user inputs or any input parameters exposed to the user to prevent [XSS](https://en.wikipedia.org/wiki/Cross-site_scripting) and [SQL injection](https://en.wikipedia.org/wiki/SQL_injection).
* Use parameterized queries to prevent SQL injection.
-* Use the principle of [least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege).
+* Use the principle of [least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege).
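To make the parameterized-query bullet concrete, here is a minimal sketch using Python's built-in `sqlite3` module (the table and values are made up for illustration). The `?` placeholder lets the driver bind the input as data, so an injection attempt is matched literally instead of being executed as SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

# Vulnerable: string interpolation splices attacker input into the SQL, e.g.
#   "SELECT email FROM users WHERE name = '%s'" % malicious_name

# Parameterized: the driver binds the value; the input is treated as data.
malicious_name = "alice' OR '1'='1"
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (malicious_name,)
).fetchall()
print(rows)  # [] -- the injection payload matches no user name
```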
### Source(s) and further reading
-* [API security checklist](https://github.com/shieldfy/API-Security-Checklist)
-* [Security guide for developers](https://github.com/FallibleInc/security-guide-for-developers)
-* [OWASP top ten](https://www.owasp.org/index.php/OWASP_Top_Ten_Cheat_Sheet)
+* [API security checklist](https://github.com/shieldfy/API-Security-Checklist)
+* [Security guide for developers](https://github.com/FallibleInc/security-guide-for-developers)
+* [OWASP top ten](https://www.owasp.org/index.php/OWASP_Top_Ten_Cheat_Sheet)
## Appendix
@@ -1595,7 +1595,7 @@ Power Exact Value Approx Value Bytes
#### Source(s) and further reading
-* [Powers of two](https://en.wikipedia.org/wiki/Power_of_two)
+* [Powers of two](https://en.wikipedia.org/wiki/Power_of_two)
### Latency numbers every programmer should know
@@ -1636,14 +1636,14 @@ Handy metrics based on numbers above:
#### Latency numbers visualized
-
+
#### Source(s) and further reading
-* [Latency numbers every programmer should know - 1](https://gist.github.com/jboner/2841832)
-* [Latency numbers every programmer should know - 2](https://gist.github.com/hellerbarde/2843375)
-* [Designs, lessons, and advice from building large distributed systems](http://www.cs.cornell.edu/projects/ladis2009/talks/dean-keynote-ladis2009.pdf)
-* [Software Engineering Advice from Building Large-Scale Distributed Systems](https://static.googleusercontent.com/media/research.google.com/en//people/jeff/stanford-295-talk.pdf)
+* [Latency numbers every programmer should know - 1](https://gist.github.com/jboner/2841832)
+* [Latency numbers every programmer should know - 2](https://gist.github.com/hellerbarde/2843375)
+* [Designs, lessons, and advice from building large distributed systems](http://www.cs.cornell.edu/projects/ladis2009/talks/dean-keynote-ladis2009.pdf)
+* [Software Engineering Advice from Building Large-Scale Distributed Systems](https://static.googleusercontent.com/media/research.google.com/en//people/jeff/stanford-295-talk.pdf)
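The handy metrics in this section follow directly from the latency table; a quick back-of-envelope check in Python (the nanosecond figures are the approximate, commonly cited values from the table above):

```python
NS_PER_SECOND = 1_000_000_000

disk_seek_ns = 10_000_000      # ~10 ms per disk seek
read_1mb_ssd_ns = 1_000_000    # ~1 ms to read 1 MB sequentially from SSD
read_1mb_memory_ns = 250_000   # ~250 us to read 1 MB sequentially from memory

print(NS_PER_SECOND // disk_seek_ns)        # 100 seeks per second
print(NS_PER_SECOND // read_1mb_ssd_ns)     # 1,000 MB/s ~= 1 GB/s from SSD
print(NS_PER_SECOND // read_1mb_memory_ns)  # 4,000 MB/s ~= 4 GB/s from memory
```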
### Additional system design interview questions
@@ -1652,28 +1652,28 @@ Handy metrics based on numbers above:
| Question | Reference(s) |
|---|---|
| Design a file sync service like Dropbox | [youtube.com](https://www.youtube.com/watch?v=PE4gwstWhmc) |
-| Design a search engine like Google | [queue.acm.org](http://queue.acm.org/detail.cfm?id=988407) [stackexchange.com](http://programmers.stackexchange.com/questions/38324/interview-question-how-would-you-implement-google-search) [ardendertat.com](http://www.ardendertat.com/2012/01/11/implementing-search-engines/) [stanford.edu](http://infolab.stanford.edu/~backrub/google.html) |
+| Design a search engine like Google | [queue.acm.org](http://queue.acm.org/detail.cfm?id=988407) [stackexchange.com](http://programmers.stackexchange.com/questions/38324/interview-question-how-would-you-implement-google-search) [ardendertat.com](http://www.ardendertat.com/2012/01/11/implementing-search-engines/) [stanford.edu](http://infolab.stanford.edu/~backrub/google.html) |
| Design a scalable web crawler like Google | [quora.com](https://www.quora.com/How-can-I-build-a-web-crawler-from-scratch) |
-| Design Google docs | [code.google.com](https://code.google.com/p/google-mobwrite/) [neil.fraser.name](https://neil.fraser.name/writing/sync/) |
+| Design Google docs | [code.google.com](https://code.google.com/p/google-mobwrite/) [neil.fraser.name](https://neil.fraser.name/writing/sync/) |
| Design a key-value store like Redis | [slideshare.net](http://www.slideshare.net/dvirsky/introduction-to-redis) |
| Design a cache system like Memcached | [slideshare.net](http://www.slideshare.net/oemebamo/introduction-to-memcached) |
-| Design a recommendation system like Amazon's | [hulu.com](https://web.archive.org/web/20170406065247/http://tech.hulu.com/blog/2011/09/19/recommendation-system.html) [ijcai13.org](http://ijcai13.org/files/tutorial_slides/td3.pdf) |
+| Design a recommendation system like Amazon's | [hulu.com](https://web.archive.org/web/20170406065247/http://tech.hulu.com/blog/2011/09/19/recommendation-system.html) [ijcai13.org](http://ijcai13.org/files/tutorial_slides/td3.pdf) |
| Design a tinyurl system like Bitly | [n00tc0d3r.blogspot.com](http://n00tc0d3r.blogspot.com/) |
-| Design a chat app like WhatsApp | [highscalability.com](http://highscalability.com/blog/2014/2/26/the-whatsapp-architecture-facebook-bought-for-19-billion.html)
-| Design a picture sharing system like Instagram | [highscalability.com](http://highscalability.com/flickr-architecture) [highscalability.com](http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html) |
-| Design the Facebook news feed function | [quora.com](http://www.quora.com/What-are-best-practices-for-building-something-like-a-News-Feed) [quora.com](http://www.quora.com/Activity-Streams/What-are-the-scaling-issues-to-keep-in-mind-while-developing-a-social-network-feed) [slideshare.net](http://www.slideshare.net/danmckinley/etsy-activity-feeds-architecture) |
-| Design the Facebook timeline function | [facebook.com](https://www.facebook.com/note.php?note_id=10150468255628920) [highscalability.com](http://highscalability.com/blog/2012/1/23/facebook-timeline-brought-to-you-by-the-power-of-denormaliza.html) |
-| Design the Facebook chat function | [erlang-factory.com](http://www.erlang-factory.com/upload/presentations/31/EugeneLetuchy-ErlangatFacebook.pdf) [facebook.com](https://www.facebook.com/note.php?note_id=14218138919&id=9445547199&index=0) |
-| Design a graph search function like Facebook's | [facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-building-out-the-infrastructure-for-graph-search/10151347573598920) [facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-indexing-and-ranking-in-graph-search/10151361720763920) [facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-the-natural-language-interface-of-graph-search/10151432733048920) |
+| Design a chat app like WhatsApp | [highscalability.com](http://highscalability.com/blog/2014/2/26/the-whatsapp-architecture-facebook-bought-for-19-billion.html) |
+| Design a picture sharing system like Instagram | [highscalability.com](http://highscalability.com/flickr-architecture) [highscalability.com](http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html) |
+| Design the Facebook news feed function | [quora.com](http://www.quora.com/What-are-best-practices-for-building-something-like-a-News-Feed) [quora.com](http://www.quora.com/Activity-Streams/What-are-the-scaling-issues-to-keep-in-mind-while-developing-a-social-network-feed) [slideshare.net](http://www.slideshare.net/danmckinley/etsy-activity-feeds-architecture) |
+| Design the Facebook timeline function | [facebook.com](https://www.facebook.com/note.php?note_id=10150468255628920) [highscalability.com](http://highscalability.com/blog/2012/1/23/facebook-timeline-brought-to-you-by-the-power-of-denormaliza.html) |
+| Design the Facebook chat function | [erlang-factory.com](http://www.erlang-factory.com/upload/presentations/31/EugeneLetuchy-ErlangatFacebook.pdf) [facebook.com](https://www.facebook.com/note.php?note_id=14218138919&id=9445547199&index=0) |
+| Design a graph search function like Facebook's | [facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-building-out-the-infrastructure-for-graph-search/10151347573598920) [facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-indexing-and-ranking-in-graph-search/10151361720763920) [facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-the-natural-language-interface-of-graph-search/10151432733048920) |
| Design a content delivery network like CloudFlare | [figshare.com](https://figshare.com/articles/Globally_distributed_content_delivery/6605972) |
-| Design a trending topic system like Twitter's | [michael-noll.com](http://www.michael-noll.com/blog/2013/01/18/implementing-real-time-trending-topics-in-storm/) [snikolov .wordpress.com](http://snikolov.wordpress.com/2012/11/14/early-detection-of-twitter-trends/) |
-| Design a random ID generation system | [blog.twitter.com](https://blog.twitter.com/2010/announcing-snowflake) [github.com](https://github.com/twitter/snowflake/) |
-| Return the top k requests during a time interval | [cs.ucsb.edu](https://www.cs.ucsb.edu/sites/cs.ucsb.edu/files/docs/reports/2005-23.pdf) [wpi.edu](http://davis.wpi.edu/xmdv/docs/EDBT11-diyang.pdf) |
+| Design a trending topic system like Twitter's | [michael-noll.com](http://www.michael-noll.com/blog/2013/01/18/implementing-real-time-trending-topics-in-storm/) [snikolov.wordpress.com](http://snikolov.wordpress.com/2012/11/14/early-detection-of-twitter-trends/) |
+| Design a random ID generation system | [blog.twitter.com](https://blog.twitter.com/2010/announcing-snowflake) [github.com](https://github.com/twitter/snowflake/) |
+| Return the top k requests during a time interval | [cs.ucsb.edu](https://www.cs.ucsb.edu/sites/cs.ucsb.edu/files/docs/reports/2005-23.pdf) [wpi.edu](http://davis.wpi.edu/xmdv/docs/EDBT11-diyang.pdf) |
| Design a system that serves data from multiple data centers | [highscalability.com](http://highscalability.com/blog/2009/8/24/how-google-serves-data-from-multiple-datacenters.html) |
-| Design an online multiplayer card game | [indieflashblog.com](https://web.archive.org/web/20180929181117/http://www.indieflashblog.com/how-to-create-an-asynchronous-multiplayer-game.html) [buildnewgames.com](http://buildnewgames.com/real-time-multiplayer/) |
-| Design a garbage collection system | [stuffwithstuff.com](http://journal.stuffwithstuff.com/2013/12/08/babys-first-garbage-collector/) [washington.edu](http://courses.cs.washington.edu/courses/csep521/07wi/prj/rick.pdf) |
+| Design an online multiplayer card game | [indieflashblog.com](https://web.archive.org/web/20180929181117/http://www.indieflashblog.com/how-to-create-an-asynchronous-multiplayer-game.html) [buildnewgames.com](http://buildnewgames.com/real-time-multiplayer/) |
+| Design a garbage collection system | [stuffwithstuff.com](http://journal.stuffwithstuff.com/2013/12/08/babys-first-garbage-collector/) [washington.edu](http://courses.cs.washington.edu/courses/csep521/07wi/prj/rick.pdf) |
| Design an API rate limiter | [https://stripe.com/blog/](https://stripe.com/blog/rate-limiters) |
-| Design a Stock Exchange (like NASDAQ or Binance) | [Jane Street](https://youtu.be/b1e4t2k2KJY) [Golang Implementation](https://around25.com/blog/building-a-trading-engine-for-a-crypto-exchange/) [Go Implemenation](http://bhomnick.net/building-a-simple-limit-order-in-go/) |
+| Design a Stock Exchange (like NASDAQ or Binance) | [Jane Street](https://youtu.be/b1e4t2k2KJY) [Golang Implementation](https://around25.com/blog/building-a-trading-engine-for-a-crypto-exchange/) [Go Implementation](http://bhomnick.net/building-a-simple-limit-order-in-go/) |
| Add a system design question | [Contribute](#contributing) |
### Real world architectures
@@ -1700,18 +1700,18 @@ Handy metrics based on numbers above:
| | | |
| Data store | **Bigtable** - Distributed column-oriented database from Google | [harvard.edu](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf) |
| Data store | **HBase** - Open source implementation of Bigtable | [slideshare.net](http://www.slideshare.net/alexbaranau/intro-to-hbase) |
-| Data store | **Cassandra** - Distributed column-oriented database from Facebook | [slideshare.net](http://www.slideshare.net/planetcassandra/cassandra-introduction-features-30103666)
+| Data store | **Cassandra** - Distributed column-oriented database from Facebook | [slideshare.net](http://www.slideshare.net/planetcassandra/cassandra-introduction-features-30103666) |
| Data store | **DynamoDB** - Document-oriented database from Amazon | [harvard.edu](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/decandia07dynamo.pdf) |
| Data store | **MongoDB** - Document-oriented database | [slideshare.net](http://www.slideshare.net/mdirolf/introduction-to-mongodb) |
| Data store | **Spanner** - Globally-distributed database from Google | [research.google.com](http://research.google.com/archive/spanner-osdi2012.pdf) |
| Data store | **Memcached** - Distributed memory caching system | [slideshare.net](http://www.slideshare.net/oemebamo/introduction-to-memcached) |
| Data store | **Redis** - Distributed memory caching system with persistence and value types | [slideshare.net](http://www.slideshare.net/dvirsky/introduction-to-redis) |
| | | |
-| File system | **Google File System (GFS)** - Distributed file system | [research.google.com](http://static.googleusercontent.com/media/research.google.com/zh-CN/us/archive/gfs-sosp2003.pdf) |
-| File system | **Hadoop File System (HDFS)** - Open source implementation of GFS | [apache.org](http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) |
+| File system | **Google File System (GFS)** - Distributed file system | [research.google.com](http://static.googleusercontent.com/media/research.google.com/zh-CN/us/archive/gfs-sosp2003.pdf) |
+| File system | **Hadoop File System (HDFS)** - Open source implementation of GFS | [apache.org](http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) |
| | | |
| Misc | **Chubby** - Lock service for loosely-coupled distributed systems from Google | [research.google.com](http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/chubby-osdi06.pdf) |
-| Misc | **Dapper** - Distributed systems tracing infrastructure | [research.google.com](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36356.pdf)
+| Misc | **Dapper** - Distributed systems tracing infrastructure | [research.google.com](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36356.pdf) |
| Misc | **Kafka** - Pub/sub message queue from LinkedIn | [slideshare.net](http://www.slideshare.net/mumrah/kafka-talk-tri-hug) |
| Misc | **Zookeeper** - Centralized infrastructure and services enabling synchronization | [slideshare.net](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) |
| | Add an architecture | [Contribute](#contributing) |
@@ -1726,23 +1726,23 @@ Handy metrics based on numbers above:
| DropBox | [How we've scaled Dropbox](https://www.youtube.com/watch?v=PE4gwstWhmc) |
| ESPN | [Operating At 100,000 duh nuh nuhs per second](http://highscalability.com/blog/2013/11/4/espns-architecture-at-scale-operating-at-100000-duh-nuh-nuhs.html) |
| Google | [Google architecture](http://highscalability.com/google-architecture) |
-| Instagram | [14 million users, terabytes of photos](http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html) [What powers Instagram](http://instagram-engineering.tumblr.com/post/13649370142/what-powers-instagram-hundreds-of-instances) |
+| Instagram | [14 million users, terabytes of photos](http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html) [What powers Instagram](http://instagram-engineering.tumblr.com/post/13649370142/what-powers-instagram-hundreds-of-instances) |
| Justin.tv | [Justin.Tv's live video broadcasting architecture](http://highscalability.com/blog/2010/3/16/justintvs-live-video-broadcasting-architecture.html) |
-| Facebook | [Scaling memcached at Facebook](https://cs.uwaterloo.ca/~brecht/courses/854-Emerging-2014/readings/key-value/fb-memcached-nsdi-2013.pdf) [TAO: Facebook’s distributed data store for the social graph](https://cs.uwaterloo.ca/~brecht/courses/854-Emerging-2014/readings/data-store/tao-facebook-distributed-datastore-atc-2013.pdf) [Facebook’s photo storage](https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Beaver.pdf) [How Facebook Live Streams To 800,000 Simultaneous Viewers](http://highscalability.com/blog/2016/6/27/how-facebook-live-streams-to-800000-simultaneous-viewers.html) |
+| Facebook | [Scaling memcached at Facebook](https://cs.uwaterloo.ca/~brecht/courses/854-Emerging-2014/readings/key-value/fb-memcached-nsdi-2013.pdf) [TAO: Facebook’s distributed data store for the social graph](https://cs.uwaterloo.ca/~brecht/courses/854-Emerging-2014/readings/data-store/tao-facebook-distributed-datastore-atc-2013.pdf) [Facebook’s photo storage](https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Beaver.pdf) [How Facebook Live Streams To 800,000 Simultaneous Viewers](http://highscalability.com/blog/2016/6/27/how-facebook-live-streams-to-800000-simultaneous-viewers.html) |
| Flickr | [Flickr architecture](http://highscalability.com/flickr-architecture) |
| Mailbox | [From 0 to one million users in 6 weeks](http://highscalability.com/blog/2013/6/18/scaling-mailbox-from-0-to-one-million-users-in-6-weeks-and-1.html) |
-| Netflix | [A 360 Degree View Of The Entire Netflix Stack](http://highscalability.com/blog/2015/11/9/a-360-degree-view-of-the-entire-netflix-stack.html) [Netflix: What Happens When You Press Play?](http://highscalability.com/blog/2017/12/11/netflix-what-happens-when-you-press-play.html) |
-| Pinterest | [From 0 To 10s of billions of page views a month](http://highscalability.com/blog/2013/4/15/scaling-pinterest-from-0-to-10s-of-billions-of-page-views-a.html) [18 million visitors, 10x growth, 12 employees](http://highscalability.com/blog/2012/5/21/pinterest-architecture-update-18-million-visitors-10x-growth.html) |
+| Netflix | [A 360 Degree View Of The Entire Netflix Stack](http://highscalability.com/blog/2015/11/9/a-360-degree-view-of-the-entire-netflix-stack.html) [Netflix: What Happens When You Press Play?](http://highscalability.com/blog/2017/12/11/netflix-what-happens-when-you-press-play.html) |
+| Pinterest | [From 0 To 10s of billions of page views a month](http://highscalability.com/blog/2013/4/15/scaling-pinterest-from-0-to-10s-of-billions-of-page-views-a.html) [18 million visitors, 10x growth, 12 employees](http://highscalability.com/blog/2012/5/21/pinterest-architecture-update-18-million-visitors-10x-growth.html) |
| Playfish | [50 million monthly users and growing](http://highscalability.com/blog/2010/9/21/playfishs-social-gaming-architecture-50-million-monthly-user.html) |
| PlentyOfFish | [PlentyOfFish architecture](http://highscalability.com/plentyoffish-architecture) |
| Salesforce | [How they handle 1.3 billion transactions a day](http://highscalability.com/blog/2013/9/23/salesforce-architecture-how-they-handle-13-billion-transacti.html) |
| Stack Overflow | [Stack Overflow architecture](http://highscalability.com/blog/2009/8/5/stack-overflow-architecture.html) |
| TripAdvisor | [40M visitors, 200M dynamic page views, 30TB data](http://highscalability.com/blog/2011/6/27/tripadvisor-architecture-40m-visitors-200m-dynamic-page-view.html) |
| Tumblr | [15 billion page views a month](http://highscalability.com/blog/2012/2/13/tumblr-architecture-15-billion-page-views-a-month-and-harder.html) |
-| Twitter | [Making Twitter 10000 percent faster](http://highscalability.com/scaling-twitter-making-twitter-10000-percent-faster) [Storing 250 million tweets a day using MySQL](http://highscalability.com/blog/2011/12/19/how-twitter-stores-250-million-tweets-a-day-using-mysql.html) [150M active users, 300K QPS, a 22 MB/S firehose](http://highscalability.com/blog/2013/7/8/the-architecture-twitter-uses-to-deal-with-150m-active-users.html) [Timelines at scale](https://www.infoq.com/presentations/Twitter-Timeline-Scalability) [Big and small data at Twitter](https://www.youtube.com/watch?v=5cKTP36HVgI) [Operations at Twitter: scaling beyond 100 million users](https://www.youtube.com/watch?v=z8LU0Cj6BOU) [How Twitter Handles 3,000 Images Per Second](http://highscalability.com/blog/2016/4/20/how-twitter-handles-3000-images-per-second.html) |
-| Uber | [How Uber scales their real-time market platform](http://highscalability.com/blog/2015/9/14/how-uber-scales-their-real-time-market-platform.html) [Lessons Learned From Scaling Uber To 2000 Engineers, 1000 Services, And 8000 Git Repositories](http://highscalability.com/blog/2016/10/12/lessons-learned-from-scaling-uber-to-2000-engineers-1000-ser.html) |
+| Twitter | [Making Twitter 10000 percent faster](http://highscalability.com/scaling-twitter-making-twitter-10000-percent-faster) [Storing 250 million tweets a day using MySQL](http://highscalability.com/blog/2011/12/19/how-twitter-stores-250-million-tweets-a-day-using-mysql.html) [150M active users, 300K QPS, a 22 MB/S firehose](http://highscalability.com/blog/2013/7/8/the-architecture-twitter-uses-to-deal-with-150m-active-users.html) [Timelines at scale](https://www.infoq.com/presentations/Twitter-Timeline-Scalability) [Big and small data at Twitter](https://www.youtube.com/watch?v=5cKTP36HVgI) [Operations at Twitter: scaling beyond 100 million users](https://www.youtube.com/watch?v=z8LU0Cj6BOU) [How Twitter Handles 3,000 Images Per Second](http://highscalability.com/blog/2016/4/20/how-twitter-handles-3000-images-per-second.html) |
+| Uber | [How Uber scales their real-time market platform](http://highscalability.com/blog/2015/9/14/how-uber-scales-their-real-time-market-platform.html) [Lessons Learned From Scaling Uber To 2000 Engineers, 1000 Services, And 8000 Git Repositories](http://highscalability.com/blog/2016/10/12/lessons-learned-from-scaling-uber-to-2000-engineers-1000-ser.html) |
| WhatsApp | [The WhatsApp architecture Facebook bought for $19 billion](http://highscalability.com/blog/2014/2/26/the-whatsapp-architecture-facebook-bought-for-19-billion.html) |
-| YouTube | [YouTube scalability](https://www.youtube.com/watch?v=w5WVu624fY8) [YouTube architecture](http://highscalability.com/youtube-architecture) |
+| YouTube | [YouTube scalability](https://www.youtube.com/watch?v=w5WVu624fY8) [YouTube architecture](http://highscalability.com/youtube-architecture) |
### Company engineering blogs
@@ -1750,60 +1750,60 @@ Handy metrics based on numbers above:
>
> Questions you encounter might be from the same domain.
-* [Airbnb Engineering](http://nerds.airbnb.com/)
-* [Atlassian Developers](https://developer.atlassian.com/blog/)
-* [AWS Blog](https://aws.amazon.com/blogs/aws/)
-* [Bitly Engineering Blog](http://word.bitly.com/)
-* [Box Blogs](https://blog.box.com/blog/category/engineering)
-* [Cloudera Developer Blog](http://blog.cloudera.com/)
-* [Dropbox Tech Blog](https://tech.dropbox.com/)
-* [Engineering at Quora](https://www.quora.com/q/quoraengineering)
-* [Ebay Tech Blog](http://www.ebaytechblog.com/)
-* [Evernote Tech Blog](https://blog.evernote.com/tech/)
-* [Etsy Code as Craft](http://codeascraft.com/)
-* [Facebook Engineering](https://www.facebook.com/Engineering)
-* [Flickr Code](http://code.flickr.net/)
-* [Foursquare Engineering Blog](http://engineering.foursquare.com/)
-* [GitHub Engineering Blog](http://githubengineering.com/)
-* [Google Research Blog](http://googleresearch.blogspot.com/)
-* [Groupon Engineering Blog](https://engineering.groupon.com/)
-* [Heroku Engineering Blog](https://engineering.heroku.com/)
-* [Hubspot Engineering Blog](http://product.hubspot.com/blog/topic/engineering)
-* [High Scalability](http://highscalability.com/)
-* [Instagram Engineering](http://instagram-engineering.tumblr.com/)
-* [Intel Software Blog](https://software.intel.com/en-us/blogs/)
-* [Jane Street Tech Blog](https://blogs.janestreet.com/category/ocaml/)
-* [LinkedIn Engineering](http://engineering.linkedin.com/blog)
-* [Microsoft Engineering](https://engineering.microsoft.com/)
-* [Microsoft Python Engineering](https://blogs.msdn.microsoft.com/pythonengineering/)
-* [Netflix Tech Blog](http://techblog.netflix.com/)
-* [Paypal Developer Blog](https://medium.com/paypal-engineering)
-* [Pinterest Engineering Blog](https://medium.com/@Pinterest_Engineering)
-* [Reddit Blog](http://www.redditblog.com/)
-* [Salesforce Engineering Blog](https://developer.salesforce.com/blogs/engineering/)
-* [Slack Engineering Blog](https://slack.engineering/)
-* [Spotify Labs](https://labs.spotify.com/)
-* [Twilio Engineering Blog](http://www.twilio.com/engineering)
-* [Twitter Engineering](https://blog.twitter.com/engineering/)
-* [Uber Engineering Blog](http://eng.uber.com/)
-* [Yahoo Engineering Blog](http://yahooeng.tumblr.com/)
-* [Yelp Engineering Blog](http://engineeringblog.yelp.com/)
-* [Zynga Engineering Blog](https://www.zynga.com/blogs/engineering)
+* [Airbnb Engineering](http://nerds.airbnb.com/)
+* [Atlassian Developers](https://developer.atlassian.com/blog/)
+* [AWS Blog](https://aws.amazon.com/blogs/aws/)
+* [Bitly Engineering Blog](http://word.bitly.com/)
+* [Box Blogs](https://blog.box.com/blog/category/engineering)
+* [Cloudera Developer Blog](http://blog.cloudera.com/)
+* [Dropbox Tech Blog](https://tech.dropbox.com/)
+* [Engineering at Quora](https://www.quora.com/q/quoraengineering)
+* [Ebay Tech Blog](http://www.ebaytechblog.com/)
+* [Evernote Tech Blog](https://blog.evernote.com/tech/)
+* [Etsy Code as Craft](http://codeascraft.com/)
+* [Facebook Engineering](https://www.facebook.com/Engineering)
+* [Flickr Code](http://code.flickr.net/)
+* [Foursquare Engineering Blog](http://engineering.foursquare.com/)
+* [GitHub Engineering Blog](http://githubengineering.com/)
+* [Google Research Blog](http://googleresearch.blogspot.com/)
+* [Groupon Engineering Blog](https://engineering.groupon.com/)
+* [Heroku Engineering Blog](https://engineering.heroku.com/)
+* [Hubspot Engineering Blog](http://product.hubspot.com/blog/topic/engineering)
+* [High Scalability](http://highscalability.com/)
+* [Instagram Engineering](http://instagram-engineering.tumblr.com/)
+* [Intel Software Blog](https://software.intel.com/en-us/blogs/)
+* [Jane Street Tech Blog](https://blogs.janestreet.com/category/ocaml/)
+* [LinkedIn Engineering](http://engineering.linkedin.com/blog)
+* [Microsoft Engineering](https://engineering.microsoft.com/)
+* [Microsoft Python Engineering](https://blogs.msdn.microsoft.com/pythonengineering/)
+* [Netflix Tech Blog](http://techblog.netflix.com/)
+* [Paypal Developer Blog](https://medium.com/paypal-engineering)
+* [Pinterest Engineering Blog](https://medium.com/@Pinterest_Engineering)
+* [Reddit Blog](http://www.redditblog.com/)
+* [Salesforce Engineering Blog](https://developer.salesforce.com/blogs/engineering/)
+* [Slack Engineering Blog](https://slack.engineering/)
+* [Spotify Labs](https://labs.spotify.com/)
+* [Twilio Engineering Blog](http://www.twilio.com/engineering)
+* [Twitter Engineering](https://blog.twitter.com/engineering/)
+* [Uber Engineering Blog](http://eng.uber.com/)
+* [Yahoo Engineering Blog](http://yahooeng.tumblr.com/)
+* [Yelp Engineering Blog](http://engineeringblog.yelp.com/)
+* [Zynga Engineering Blog](https://www.zynga.com/blogs/engineering)
#### Source(s) and further reading
Looking to add a blog? To avoid duplicating work, consider adding your company blog to the following repo:
-* [kilimchoi/engineering-blogs](https://github.com/kilimchoi/engineering-blogs)
+* [kilimchoi/engineering-blogs](https://github.com/kilimchoi/engineering-blogs)
## Under development
-Interested in adding a section or helping complete one in-progress? [Contribute](#contributing)!
+Interested in adding a section or helping complete one in-progress? [Contribute](#contributing)!
* Distributed computing with MapReduce
* Consistent hashing
* Scatter gather
-* [Contribute](#contributing)
+* [Contribute](#contributing)
## Credits
@@ -1811,28 +1811,28 @@ Credits and sources are provided throughout this repo.
Special thanks to:
-* [Hired in tech](http://www.hiredintech.com/system-design/the-system-design-process/)
-* [Cracking the coding interview](https://www.amazon.com/dp/0984782850/)
-* [High scalability](http://highscalability.com/)
-* [checkcheckzz/system-design-interview](https://github.com/checkcheckzz/system-design-interview)
-* [shashank88/system_design](https://github.com/shashank88/system_design)
-* [mmcgrana/services-engineering](https://github.com/mmcgrana/services-engineering)
-* [System design cheat sheet](https://gist.github.com/vasanthk/485d1c25737e8e72759f)
-* [A distributed systems reading list](http://dancres.github.io/Pages/)
-* [Cracking the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
+* [Hired in tech](http://www.hiredintech.com/system-design/the-system-design-process/)
+* [Cracking the coding interview](https://www.amazon.com/dp/0984782850/)
+* [High scalability](http://highscalability.com/)
+* [checkcheckzz/system-design-interview](https://github.com/checkcheckzz/system-design-interview)
+* [shashank88/system_design](https://github.com/shashank88/system_design)
+* [mmcgrana/services-engineering](https://github.com/mmcgrana/services-engineering)
+* [System design cheat sheet](https://gist.github.com/vasanthk/485d1c25737e8e72759f)
+* [A distributed systems reading list](http://dancres.github.io/Pages/)
+* [Cracking the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
## Contact info
Feel free to contact me to discuss any issues, questions, or comments.
-My contact info can be found on my [GitHub page](https://github.com/donnemartin).
+My contact info can be found on my [GitHub page](https://github.com/donnemartin).
## License
-*I am providing code and resources in this repository to you under an open source license. Because this is my personal repository, the license you receive to my code and resources is from me and not my employer (Facebook).*
+*I am providing code and resources in this repository to you under an open source license. Because this is my personal repository, the license you receive to my code and resources is from me and not my employer (Facebook).*
Copyright 2017 Donne Martin
- Creative Commons Attribution 4.0 International License (CC BY 4.0)
+ Creative Commons Attribution 4.0 International License (CC BY 4.0)
http://creativecommons.org/licenses/by/4.0/
diff --git a/TRANSLATIONS.md b/TRANSLATIONS.md
index 5bfae9af..5ff60016 100644
--- a/TRANSLATIONS.md
+++ b/TRANSLATIONS.md
@@ -4,7 +4,7 @@
## Contributing
-See the [Contributing Guidelines](CONTRIBUTING.md).
+See the [Contributing Guidelines](CONTRIBUTING.md).
## Translation Statuses
@@ -14,7 +14,7 @@ See the [Contributing Guidelines](CONTRIBUTING.md).
**Within the past 2 months, there has been 1) No active work in the translation fork, and 2) No discussions from previous maintainer(s) in the discussion thread.*
-Languages not listed here have not been started, [contribute](CONTRIBUTING.md)!
+Languages not listed here have not been started, [contribute](CONTRIBUTING.md)!
Languages are grouped by status and are listed in alphabetical order.
@@ -22,33 +22,33 @@ Languages are grouped by status and are listed in alphabetical order.
### 🎉 Japanese
-* [README-ja.md](README-ja.md)
-* Maintainer(s): [@tsukukobaan](https://github.com/tsukukobaan) 👏
+* [README-ja.md](README-ja.md)
+* Maintainer(s): [@tsukukobaan](https://github.com/tsukukobaan) 👏
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/100
### 🎉 Simplified Chinese
-* [zh-Hans.md](README-zh-Hans.md)
-* Maintainer(s): [@sqrthree](https://github.com/sqrthree) 👏
+* [zh-Hans.md](README-zh-Hans.md)
+* Maintainer(s): [@sqrthree](https://github.com/sqrthree) 👏
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/38
### 🎉 Traditional Chinese
-* [README-zh-TW.md](README-zh-TW.md)
-* Maintainer(s): [@kevingo](https://github.com/kevingo) 👏
+* [README-zh-TW.md](README-zh-TW.md)
+* Maintainer(s): [@kevingo](https://github.com/kevingo) 👏
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/88
## In Progress
### ⏳ Korean
-* Maintainer(s): [@bonomoon](https://github.com/bonomoon), [@mingrammer](https://github.com/mingrammer) 👏
+* Maintainer(s): [@bonomoon](https://github.com/bonomoon), [@mingrammer](https://github.com/mingrammer) 👏
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/102
* Translation Fork: https://github.com/bonomoon/system-design-primer, https://github.com/donnemartin/system-design-primer/pull/103
### ⏳ Russian
-* Maintainer(s): [@voitau](https://github.com/voitau), [@DmitryOlkhovoi](https://github.com/DmitryOlkhovoi) 👏
+* Maintainer(s): [@voitau](https://github.com/voitau), [@DmitryOlkhovoi](https://github.com/DmitryOlkhovoi) 👏
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/87
* Translation Fork: https://github.com/voitau/system-design-primer/blob/master/README-ru.md
@@ -58,106 +58,106 @@ Languages are grouped by status and are listed in alphabetical order.
* If you're able to commit to being an active maintainer for a language, let us know in the discussion thread for your language and update this file with a pull request.
* If you're listed here as a "Previous Maintainer" but can commit to being an active maintainer, also let us know.
-* See the [Contributing Guidelines](CONTRIBUTING.md).
+* See the [Contributing Guidelines](CONTRIBUTING.md).
### ❗ Arabic
-* Maintainer(s): **Help Wanted** ✋
- * Previous Maintainer(s): [@aymns](https://github.com/aymns)
+* Maintainer(s): **Help Wanted** ✋
+ * Previous Maintainer(s): [@aymns](https://github.com/aymns)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/170
* Translation Fork: https://github.com/aymns/system-design-primer/blob/develop/README-ar.md
### ❗ Bengali
-* Maintainer(s): **Help Wanted** ✋
- * Previous Maintainer(s): [@nutboltu](https://github.com/nutboltu)
+* Maintainer(s): **Help Wanted** ✋
+ * Previous Maintainer(s): [@nutboltu](https://github.com/nutboltu)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/220
* Translation Fork: https://github.com/donnemartin/system-design-primer/pull/240
### ❗ Brazilian Portuguese
-* Maintainer(s): **Help Wanted** ✋
- * Previous Maintainer(s): [@IuryAlves](https://github.com/IuryAlves)
+* Maintainer(s): **Help Wanted** ✋
+ * Previous Maintainer(s): [@IuryAlves](https://github.com/IuryAlves)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/40
* Translation Fork: https://github.com/IuryAlves/system-design-primer, https://github.com/donnemartin/system-design-primer/pull/67
### ❗ French
-* Maintainer(s): **Help Wanted** ✋
- * Previous Maintainer(s): [@spuyet](https://github.com/spuyet)
+* Maintainer(s): **Help Wanted** ✋
+ * Previous Maintainer(s): [@spuyet](https://github.com/spuyet)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/250
* Translation Fork: https://github.com/spuyet/system-design-primer/blob/add-french-translation/README-fr.md
### ❗ German
-* Maintainer(s): **Help Wanted** ✋
- * Previous Maintainer(s): [@Allaman](https://github.com/Allaman)
+* Maintainer(s): **Help Wanted** ✋
+ * Previous Maintainer(s): [@Allaman](https://github.com/Allaman)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/186
* Translation Fork: None
### ❗ Greek
-* Maintainer(s): **Help Wanted** ✋
- * Previous Maintainer(s): [@Belonias](https://github.com/Belonias)
+* Maintainer(s): **Help Wanted** ✋
+ * Previous Maintainer(s): [@Belonias](https://github.com/Belonias)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/130
* Translation Fork: None
### ❗ Hebrew
-* Maintainer(s): **Help Wanted** ✋
- * Previous Maintainer(s): [@EladLeev](https://github.com/EladLeev)
+* Maintainer(s): **Help Wanted** ✋
+ * Previous Maintainer(s): [@EladLeev](https://github.com/EladLeev)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/272
* Translation Fork: https://github.com/EladLeev/system-design-primer/tree/he-translate
### ❗ Italian
-* Maintainer(s): **Help Wanted** ✋
- * Previous Maintainer(s): [@pgoodjohn](https://github.com/pgoodjohn)
+* Maintainer(s): **Help Wanted** ✋
+ * Previous Maintainer(s): [@pgoodjohn](https://github.com/pgoodjohn)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/104
* Translation Fork: https://github.com/pgoodjohn/system-design-primer
### ❗ Persian
-* Maintainer(s): **Help Wanted** ✋
- * Previous Maintainer(s): [@hadisinaee](https://github.com/hadisinaee)
+* Maintainer(s): **Help Wanted** ✋
+ * Previous Maintainer(s): [@hadisinaee](https://github.com/hadisinaee)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/pull/112
* Translation Fork: https://github.com/donnemartin/system-design-primer/pull/112
### ❗ Spanish
-* Maintainer(s): **Help Wanted** ✋
- * Previous Maintainer(s): [@eamanu](https://github.com/eamanu)
+* Maintainer(s): **Help Wanted** ✋
+ * Previous Maintainer(s): [@eamanu](https://github.com/eamanu)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/136
* Translation Fork: https://github.com/donnemartin/system-design-primer/pull/189
### ❗ Thai
-* Maintainer(s): **Help Wanted** ✋
- * Previous Maintainer(s): [@iphayao](https://github.com/iphayao)
+* Maintainer(s): **Help Wanted** ✋
+ * Previous Maintainer(s): [@iphayao](https://github.com/iphayao)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/187
* Translation Fork: https://github.com/donnemartin/system-design-primer/pull/221
### ❗ Turkish
-* Maintainer(s): **Help Wanted** ✋
- * Previous Maintainer(s): [@hwclass](https://github.com/hwclass), [@canerbaran](https://github.com/canerbaran), [@emrahtoy](https://github.com/emrahtoy)
+* Maintainer(s): **Help Wanted** ✋
+ * Previous Maintainer(s): [@hwclass](https://github.com/hwclass), [@canerbaran](https://github.com/canerbaran), [@emrahtoy](https://github.com/emrahtoy)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/39
* Translation Fork: https://github.com/donnemartin/system-design-primer/pull/239
### ❗ Ukrainian
-* Maintainer(s): **Help Wanted** ✋
- * Previous Maintainer(s): [@Kietzmann](https://github.com/Kietzmann), [@Acarus](https://github.com/Acarus)
+* Maintainer(s): **Help Wanted** ✋
+ * Previous Maintainer(s): [@Kietzmann](https://github.com/Kietzmann), [@Acarus](https://github.com/Acarus)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/248
* Translation Fork: https://github.com/Acarus/system-design-primer
### ❗ Vietnamese
-* Maintainer(s): **Help Wanted** ✋
- * Previous Maintainer(s): [@tranlyvu](https://github.com/tranlyvu), [@duynguyenhoang](https://github.com/duynguyenhoang)
+* Maintainer(s): **Help Wanted** ✋
+ * Previous Maintainer(s): [@tranlyvu](https://github.com/tranlyvu), [@duynguyenhoang](https://github.com/duynguyenhoang)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/127
* Translation Fork: https://github.com/donnemartin/system-design-primer/pull/241, https://github.com/donnemartin/system-design-primer/pull/327
## Not Started
-Languages not listed here have not been started, [contribute](CONTRIBUTING.md)!
+Languages not listed here have not been started, [contribute](CONTRIBUTING.md)!
diff --git a/generate-epub.sh b/generate-epub.sh
index 18690fbb..d4032189 100755
--- a/generate-epub.sh
+++ b/generate-epub.sh
@@ -38,7 +38,7 @@ generate () {
check_dependencies () {
for dependency in "${dependencies[@]}"
do
- if ! [ -x "$(command -v $dependency)" ]; then
+ if ! [ -x "$(command -v $dependency)" ]; then
echo "Error: $dependency is not installed." >&2
exit 1
fi
diff --git a/resources/noat.cards/Application layer.md b/resources/noat.cards/Application layer.md
new file mode 100644
index 00000000..ac1fcf4d
--- /dev/null
+++ b/resources/noat.cards/Application layer.md
@@ -0,0 +1,42 @@
++++
+noatcards = True
+isdraft = False
++++
+
+Application layer
+-----------------
+
+### Application layer - Introduction
+
+[ ](https://camo.githubusercontent.com/feeb549c5b6e94f65c613635f7166dc26e0c7de7/687474703a2f2f692e696d6775722e636f6d2f7942355359776d2e706e67)
+
+_[Source: Intro to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale/#platform_layer)_
+
+Separating out the web layer from the application layer (also known as platform layer) allows you to scale and configure both layers independently. Adding a new API results in adding application servers without necessarily adding additional web servers.
+
+The single responsibility principle advocates for small and autonomous services that work together. Small teams with small services can plan more aggressively for rapid growth.
+
+Workers in the application layer also help enable [asynchronism](https://github.com/donnemartin/system-design-primer#asynchronism).
+
+### Microservices
+
+Related to this discussion are [microservices](https://en.wikipedia.org/wiki/Microservices), which can be described as a suite of independently deployable, small, modular services. Each service runs a unique process and communicates through a well-defined, lightweight mechanism to serve a business goal. [1](https://smartbear.com/learn/api-design/what-are-microservices)
+
+Pinterest, for example, could have the following microservices: user profile, follower, feed, search, photo upload, etc.
+
+### Service Discovery
+
+Systems such as [Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) can help services find each other by keeping track of registered names, addresses, ports, etc.
+
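The idea can be sketched as a toy registry: a map from service name to live addresses. This is a hypothetical in-memory stand-in, not Zookeeper's actual API; real systems add heartbeats, ephemeral registrations, and watches for change notification.

```python
# Minimal service-registry sketch (hypothetical API, not Zookeeper's).
class ServiceRegistry:

    def __init__(self):
        self._services = {}  # name -> set of "host:port" addresses

    def register(self, name, address):
        self._services.setdefault(name, set()).add(address)

    def deregister(self, name, address):
        self._services.get(name, set()).discard(address)

    def lookup(self, name):
        """Return the live addresses for a service, empty if none registered."""
        return sorted(self._services.get(name, set()))


registry = ServiceRegistry()
registry.register('user-profile', '10.0.0.1:8080')
registry.register('user-profile', '10.0.0.2:8080')
registry.deregister('user-profile', '10.0.0.1:8080')  # instance went away
```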
+### Disadvantage(s): application layer
+
+* Adding an application layer with loosely coupled services requires a different approach from an architectural, operations, and process viewpoint (vs a monolithic system).
+* Microservices can add complexity in terms of deployments and operations.
+
+### Source(s) and further reading
+
+* [Intro to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale)
+* [Crack the system design interview](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)
+* [Service oriented architecture](https://en.wikipedia.org/wiki/Service-oriented_architecture)
+* [Introduction to Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper)
+* [Here's what you need to know about building microservices](https://cloudncode.wordpress.com/2016/07/22/msa-getting-started/)
\ No newline at end of file
diff --git a/resources/noat.cards/Asynchronism.md b/resources/noat.cards/Asynchronism.md
new file mode 100644
index 00000000..946768d5
--- /dev/null
+++ b/resources/noat.cards/Asynchronism.md
@@ -0,0 +1,50 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Asynchronism
+
+[ ](https://camo.githubusercontent.com/c01ec137453216bbc188e3a8f16da39ec9131234/687474703a2f2f692e696d6775722e636f6d2f353447597353782e706e67)
+_[Source: Intro to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale/#platform_layer)_
+
+Asynchronous workflows help reduce request times for expensive operations that would otherwise be performed in-line. They can also help by doing time-consuming work in advance, such as periodic aggregation of data.
+
+### Message queues
+
+Message queues receive, hold, and deliver messages. If an operation is too slow to perform inline, you can use a message queue with the following workflow:
+
+* An application publishes a job to the queue, then notifies the user of job status
+* A worker picks up the job from the queue, processes it, then signals the job is complete
+
+The user is not blocked and the job is processed in the background. During this time, the client might optionally do a small amount of processing to make it seem like the task has completed. For example, if posting a tweet, the tweet could be instantly posted to your timeline, but it could take some time before your tweet is actually delivered to all of your followers.
+
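The publish/worker flow above can be sketched with Python's standard library `queue` standing in for the broker (a real deployment would use RabbitMQ, SQS, etc.):

```python
# Sketch of the message-queue workflow: the application publishes jobs and
# returns immediately; a background worker drains the queue.
import queue
import threading

jobs = queue.Queue()
results = []

def worker():
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut the worker down
            break
        results.append(job * 2)  # "process" the job
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

# The application is not blocked while the worker processes in the background.
for job in (1, 2, 3):
    jobs.put(job)

jobs.put(None)
t.join()
```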
+Redis is useful as a simple message broker but messages can be lost.
+
+RabbitMQ is popular but requires you to adapt to the 'AMQP' protocol and manage your own nodes.
+
+Amazon SQS is hosted but can have high latency and has the possibility of messages being delivered twice.
+
+### Task queues
+
+Task queues receive tasks and their related data, run them, then deliver their results. They can support scheduling and can be used to run computationally-intensive jobs in the background.
+
+Celery has support for scheduling and primarily has Python support.
+
+### Back pressure
+
+If queues start to grow significantly, the queue size can become larger than memory, resulting in cache misses, disk reads, and even slower performance. [Back pressure](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html) can help by limiting the queue size, thereby maintaining a high throughput rate and good response times for jobs already in the queue. Once the queue fills up, clients get a server busy or HTTP 503 status code to try again later. Clients can retry the request at a later time, perhaps with [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff).
+
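A minimal sketch of back pressure, assuming a bounded in-process queue: a full queue means "server busy" (think HTTP 503), and the client computes exponential backoff delays before retrying. Helper names here are hypothetical, for illustration only.

```python
# Back-pressure sketch: a bounded queue rejects new work when full.
import queue

jobs = queue.Queue(maxsize=2)  # small bound to keep the example visible

def submit(job):
    """Return True if accepted, False if the server is busy (HTTP 503 analog)."""
    try:
        jobs.put_nowait(job)
        return True
    except queue.Full:
        return False

def backoff_delays(base=0.1, retries=4):
    """Exponential backoff schedule: base * 2**attempt seconds."""
    return [base * (2 ** attempt) for attempt in range(retries)]

# Only the first two jobs fit; the rest are rejected and should be retried.
accepted = [submit(job) for job in range(4)]
```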
+### Disadvantage(s): asynchronism
+
+* Use cases such as inexpensive calculations and realtime workflows might be better suited for synchronous operations, as introducing queues can add delays and complexity.
+
+### Source(s) and further reading
+
+* [It's all a numbers game](https://www.youtube.com/watch?v=1KRYH75wgy4)
+* [Applying back pressure when overloaded](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html)
+* [Little's law](https://en.wikipedia.org/wiki/Little%27s_law)
+* [What is the difference between a message queue and a task queue?](https://www.quora.com/What-is-the-difference-between-a-message-queue-and-a-task-queue-Why-would-a-task-queue-require-a-message-broker-like-RabbitMQ-Redis-Celery-or-IronMQ-to-function)
+
\ No newline at end of file
diff --git a/resources/noat.cards/Availability patterns.md b/resources/noat.cards/Availability patterns.md
new file mode 100644
index 00000000..3cce966d
--- /dev/null
+++ b/resources/noat.cards/Availability patterns.md
@@ -0,0 +1,69 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Availability patterns
+
+There are two main patterns to support high availability: fail-over and replication.
+
+### Active-passive (Fail-Over)
+
+With active-passive fail-over, heartbeats are sent between the active and the passive server on standby. If the heartbeat is interrupted, the passive server takes over the active's IP address and resumes service.
+
+The length of downtime is determined by whether the passive server is already running in 'hot' standby or whether it needs to start up from 'cold' standby. Only the active server handles traffic.
+
+Active-passive failover can also be referred to as master-slave failover.
+
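The heartbeat logic above can be sketched as a toy model, with explicit clock values so the example stays deterministic (class name and timeout are hypothetical):

```python
# Active-passive fail-over sketch: the passive server promotes itself
# when heartbeats from the active server stop arriving.
HEARTBEAT_TIMEOUT = 3  # seconds of silence before fail-over (illustrative value)

class PassiveServer:

    def __init__(self):
        self.active = False
        self.last_heartbeat = 0

    def on_heartbeat(self, now):
        """Record a heartbeat received from the active server."""
        self.last_heartbeat = now

    def check(self, now):
        """Promote to active if the heartbeat has been silent too long."""
        if now - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.active = True  # take over the active's IP and resume service
        return self.active

standby = PassiveServer()
standby.on_heartbeat(now=1)
```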
+### Active-active (Fail-Over)
+
+In active-active, both servers are managing traffic, spreading the load between them.
+
+If the servers are public-facing, the DNS would need to know about the public IPs of both servers. If the servers are internal-facing, application logic would need to know about both servers.
+
+Active-active failover can also be referred to as master-master failover.
+
+### Disadvantage(s): failover
+
+* Fail-over adds more hardware and additional complexity.
+* There is a potential for loss of data if the active system fails before any newly written data can be replicated to the passive.
+
+
+### Master-slave replication
+
+The master serves reads and writes, replicating writes to one or more slaves, which serve only reads. Slaves can also replicate to additional slaves in a tree-like fashion. If the master goes offline, the system can continue to operate in read-only mode until a slave is promoted to a master or a new master is provisioned.
+
+[ ](https://camo.githubusercontent.com/6a097809b9690236258747d969b1d3e0d93bb8ca/687474703a2f2f692e696d6775722e636f6d2f4339696f47746e2e706e67)
+_[Source: Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)_
+
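Read/write splitting under master-slave replication can be sketched like this: writes go to the master and are copied to the replicas, reads are spread round-robin across the replicas. This is a toy in-memory store with synchronous replication for simplicity; real replication is typically asynchronous, which is where replication lag comes from.

```python
# Read/write splitting sketch for master-slave replication (names hypothetical).
import itertools

class ReplicatedStore:

    def __init__(self, master, slaves):
        self.master = master
        self.slaves = slaves
        self._next_slave = itertools.cycle(slaves)  # round-robin read balancing

    def write(self, key, value):
        self.master[key] = value
        for slave in self.slaves:   # replication, synchronous here for simplicity
            slave[key] = value

    def read(self, key):
        return next(self._next_slave).get(key)

store = ReplicatedStore(master={}, slaves=[{}, {}])
store.write('user:1', 'alice')
```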
+### Disadvantage(s): master-slave replication
+
+* Additional logic is needed to promote a slave to a master.
+* See [Disadvantage(s): replication](https://github.com/donnemartin/system-design-primer#disadvantages-replication) for points related to both master-slave and master-master.
+
+### Master-master replication
+
+Both masters serve reads and writes and coordinate with each other on writes. If either master goes down, the system can continue to operate with both reads and writes.
+
+[ ](https://camo.githubusercontent.com/5862604b102ee97d85f86f89edda44bde85a5b7f/687474703a2f2f692e696d6775722e636f6d2f6b7241484c47672e706e67)
+_[Source: Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)_
+
+### Disadvantage(s): master-master replication
+
+* You'll need a load balancer or you'll need to make changes to your application logic to determine where to write.
+* Most master-master systems are either loosely consistent (violating ACID) or have increased write latency due to synchronization.
+* Conflict resolution comes more into play as more write nodes are added and as latency increases.
+* See [Disadvantage(s): replication](https://github.com/donnemartin/system-design-primer#disadvantages-replication) for points related to both master-slave and master-master.
+
+### Disadvantage(s): replication
+
+* There is a potential for loss of data if the master fails before any newly written data can be replicated to other nodes.
+* Writes are replayed to the read replicas. If there are a lot of writes, the read replicas can get bogged down with replaying writes and can't do as many reads.
+* The more read slaves, the more you have to replicate, which leads to greater replication lag.
+* On some systems, writing to the master can spawn multiple threads to write in parallel, whereas read replicas only support writing sequentially with a single thread.
+* Replication adds more hardware and additional complexity.
+
+### Source(s) and further reading: replication
+
+* [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
+* [Multi-master replication](https://en.wikipedia.org/wiki/Multi-master_replication)
\ No newline at end of file
diff --git a/solutions/object_oriented_design/call_center/call_center.ipynb b/solutions/object_oriented_design/call_center/call_center.ipynb
index c540c6a6..97d60d51 100644
--- a/solutions/object_oriented_design/call_center/call_center.ipynb
+++ b/solutions/object_oriented_design/call_center/call_center.ipynb
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer)."
+ "This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer)."
]
},
{
@@ -67,118 +67,118 @@
"from enum import Enum\n",
"\n",
"\n",
- "class Rank(Enum):\n",
+ "class Rank(Enum):\n",
"\n",
" OPERATOR = 0\n",
" SUPERVISOR = 1\n",
" DIRECTOR = 2\n",
"\n",
"\n",
- "class Employee(metaclass=ABCMeta):\n",
+ "class Employee(metaclass=ABCMeta):\n",
"\n",
- " def __init__(self, employee_id, name, rank, call_center):\n",
+ " def __init__(self, employee_id, name, rank, call_center):\n",
" self.employee_id = employee_id\n",
" self.name = name\n",
" self.rank = rank\n",
" self.call = None\n",
" self.call_center = call_center\n",
"\n",
- " def take_call(self, call):\n",
+ " def take_call(self, call):\n",
" \"\"\"Assume the employee will always successfully take the call.\"\"\"\n",
" self.call = call\n",
" self.call.employee = self\n",
" self.call.state = CallState.IN_PROGRESS\n",
"\n",
- " def complete_call(self):\n",
+ " def complete_call(self):\n",
" self.call.state = CallState.COMPLETE\n",
- " self.call_center.notify_call_completed(self.call)\n",
+ " self.call_center.notify_call_completed(self.call)\n",
"\n",
" @abstractmethod\n",
- " def escalate_call(self):\n",
+ " def escalate_call(self):\n",
" pass\n",
"\n",
- " def _escalate_call(self):\n",
+ " def _escalate_call(self):\n",
" self.call.state = CallState.READY\n",
" call = self.call\n",
" self.call = None\n",
- " self.call_center.notify_call_escalated(call)\n",
+ " self.call_center.notify_call_escalated(call)\n",
"\n",
"\n",
- "class Operator(Employee):\n",
+ "class Operator(Employee):\n",
"\n",
- " def __init__(self, employee_id, name):\n",
- " super(Operator, self).__init__(employee_id, name, Rank.OPERATOR)\n",
+ " def __init__(self, employee_id, name, call_center=None):\n",
+ " super(Operator, self).__init__(employee_id, name, Rank.OPERATOR, call_center)\n",
"\n",
- " def escalate_call(self):\n",
+ " def escalate_call(self):\n",
" self.call.level = Rank.SUPERVISOR\n",
- " self._escalate_call()\n",
+ " self._escalate_call()\n",
"\n",
"\n",
- "class Supervisor(Employee):\n",
+ "class Supervisor(Employee):\n",
"\n",
- " def __init__(self, employee_id, name):\n",
- " super(Operator, self).__init__(employee_id, name, Rank.SUPERVISOR)\n",
+ " def __init__(self, employee_id, name, call_center=None):\n",
+ " super(Supervisor, self).__init__(employee_id, name, Rank.SUPERVISOR, call_center)\n",
"\n",
- " def escalate_call(self):\n",
+ " def escalate_call(self):\n",
" self.call.level = Rank.DIRECTOR\n",
- " self._escalate_call()\n",
+ " self._escalate_call()\n",
"\n",
"\n",
- "class Director(Employee):\n",
+ "class Director(Employee):\n",
"\n",
- " def __init__(self, employee_id, name):\n",
- " super(Operator, self).__init__(employee_id, name, Rank.DIRECTOR)\n",
+ " def __init__(self, employee_id, name, call_center=None):\n",
+ " super(Director, self).__init__(employee_id, name, Rank.DIRECTOR, call_center)\n",
"\n",
- " def escalate_call(self):\n",
- " raise NotImplemented('Directors must be able to handle any call')\n",
+ " def escalate_call(self):\n",
+ " raise NotImplementedError('Directors must be able to handle any call')\n",
"\n",
"\n",
- "class CallState(Enum):\n",
+ "class CallState(Enum):\n",
"\n",
" READY = 0\n",
" IN_PROGRESS = 1\n",
" COMPLETE = 2\n",
"\n",
"\n",
- "class Call(object):\n",
+ "class Call(object):\n",
"\n",
- " def __init__(self, rank):\n",
+ " def __init__(self, rank):\n",
" self.state = CallState.READY\n",
" self.rank = rank\n",
" self.employee = None\n",
"\n",
"\n",
- "class CallCenter(object):\n",
+ "class CallCenter(object):\n",
"\n",
- " def __init__(self, operators, supervisors, directors):\n",
+ " def __init__(self, operators, supervisors, directors):\n",
" self.operators = operators\n",
" self.supervisors = supervisors\n",
" self.directors = directors\n",
- " self.queued_calls = deque()\n",
+ " self.queued_calls = deque()\n",
"\n",
- " def dispatch_call(self, call):\n",
- " if call.rank not in (Rank.OPERATOR, Rank.SUPERVISOR, Rank.DIRECTOR):\n",
+ " def dispatch_call(self, call):\n",
+ " if call.rank not in (Rank.OPERATOR, Rank.SUPERVISOR, Rank.DIRECTOR):\n",
" raise ValueError('Invalid call rank: {}'.format(call.rank))\n",
" employee = None\n",
" if call.rank == Rank.OPERATOR:\n",
- " employee = self._dispatch_call(call, self.operators)\n",
+ " employee = self._dispatch_call(call, self.operators)\n",
" if call.rank == Rank.SUPERVISOR or employee is None:\n",
- " employee = self._dispatch_call(call, self.supervisors)\n",
+ " employee = self._dispatch_call(call, self.supervisors)\n",
" if call.rank == Rank.DIRECTOR or employee is None:\n",
- " employee = self._dispatch_call(call, self.directors)\n",
+ " employee = self._dispatch_call(call, self.directors)\n",
" if employee is None:\n",
- " self.queued_calls.append(call)\n",
+ " self.queued_calls.append(call)\n",
"\n",
- " def _dispatch_call(self, call, employees):\n",
+ " def _dispatch_call(self, call, employees):\n",
" for employee in employees:\n",
" if employee.call is None:\n",
- " employee.take_call(call)\n",
+ " employee.take_call(call)\n",
" return employee\n",
" return None\n",
"\n",
- " def notify_call_escalated(self, call): # ...\n",
- " def notify_call_completed(self, call): # ...\n",
- " def dispatch_queued_call_to_newly_freed_employee(self, call, employee): # ..."
+ " def notify_call_escalated(self, call): # ...\n",
+ " def notify_call_completed(self, call): # ...\n",
+ " def dispatch_queued_call_to_newly_freed_employee(self, call, employee): # ..."
]
}
],
diff --git a/solutions/object_oriented_design/call_center/call_center.py b/solutions/object_oriented_design/call_center/call_center.py
index 1d5e7bc6..98990c89 100644
--- a/solutions/object_oriented_design/call_center/call_center.py
+++ b/solutions/object_oriented_design/call_center/call_center.py
@@ -3,120 +3,120 @@ from collections import deque
from enum import Enum
-class Rank(Enum):
+class Rank(Enum):
OPERATOR = 0
SUPERVISOR = 1
DIRECTOR = 2
-class Employee(metaclass=ABCMeta):
+class Employee(metaclass=ABCMeta):
- def __init__(self, employee_id, name, rank, call_center):
+ def __init__(self, employee_id, name, rank, call_center):
self.employee_id = employee_id
self.name = name
self.rank = rank
self.call = None
self.call_center = call_center
- def take_call(self, call):
+ def take_call(self, call):
"""Assume the employee will always successfully take the call."""
self.call = call
self.call.employee = self
self.call.state = CallState.IN_PROGRESS
- def complete_call(self):
+ def complete_call(self):
self.call.state = CallState.COMPLETE
- self.call_center.notify_call_completed(self.call)
+ self.call_center.notify_call_completed(self.call)
@abstractmethod
- def escalate_call(self):
+ def escalate_call(self):
pass
- def _escalate_call(self):
+ def _escalate_call(self):
self.call.state = CallState.READY
call = self.call
self.call = None
- self.call_center.notify_call_escalated(call)
+ self.call_center.notify_call_escalated(call)
-class Operator(Employee):
+class Operator(Employee):
- def __init__(self, employee_id, name):
- super(Operator, self).__init__(employee_id, name, Rank.OPERATOR)
+ def __init__(self, employee_id, name, call_center=None):
+ super(Operator, self).__init__(employee_id, name, Rank.OPERATOR, call_center)
- def escalate_call(self):
+ def escalate_call(self):
self.call.level = Rank.SUPERVISOR
- self._escalate_call()
+ self._escalate_call()
-class Supervisor(Employee):
+class Supervisor(Employee) :
- def __init__(self, employee_id, name):
- super(Operator, self).__init__(employee_id, name, Rank.SUPERVISOR)
+ def __init__(self, employee_id, name, call_center) :
+ super(Supervisor, self) .__init__(employee_id, name, Rank.SUPERVISOR, call_center)
- def escalate_call(self):
+ def escalate_call(self) :
self.call.level = Rank.DIRECTOR
- self._escalate_call()
+ self._escalate_call()
-class Director(Employee):
+class Director(Employee) :
- def __init__(self, employee_id, name):
- super(Operator, self).__init__(employee_id, name, Rank.DIRECTOR)
+ def __init__(self, employee_id, name, call_center) :
+ super(Director, self) .__init__(employee_id, name, Rank.DIRECTOR, call_center)
- def escalate_call(self):
- raise NotImplementedError('Directors must be able to handle any call')
+ def escalate_call(self) :
+ raise NotImplementedError('Directors must be able to handle any call')
-class CallState(Enum):
+class CallState(Enum) :
READY = 0
IN_PROGRESS = 1
COMPLETE = 2
-class Call(object):
+class Call(object) :
- def __init__(self, rank):
+ def __init__(self, rank) :
self.state = CallState.READY
self.rank = rank
self.employee = None
-class CallCenter(object):
+class CallCenter(object) :
- def __init__(self, operators, supervisors, directors):
+ def __init__(self, operators, supervisors, directors) :
self.operators = operators
self.supervisors = supervisors
self.directors = directors
- self.queued_calls = deque()
+ self.queued_calls = deque()
- def dispatch_call(self, call):
- if call.rank not in (Rank.OPERATOR, Rank.SUPERVISOR, Rank.DIRECTOR):
+ def dispatch_call(self, call) :
+ if call.rank not in (Rank.OPERATOR, Rank.SUPERVISOR, Rank.DIRECTOR) :
raise ValueError('Invalid call rank: {}'.format(call.rank))
employee = None
if call.rank == Rank.OPERATOR:
- employee = self._dispatch_call(call, self.operators)
+ employee = self._dispatch_call(call, self.operators)
if call.rank == Rank.SUPERVISOR or employee is None:
- employee = self._dispatch_call(call, self.supervisors)
+ employee = self._dispatch_call(call, self.supervisors)
if call.rank == Rank.DIRECTOR or employee is None:
- employee = self._dispatch_call(call, self.directors)
+ employee = self._dispatch_call(call, self.directors)
if employee is None:
- self.queued_calls.append(call)
+ self.queued_calls.append(call)
- def _dispatch_call(self, call, employees):
+ def _dispatch_call(self, call, employees) :
for employee in employees:
if employee.call is None:
- employee.take_call(call)
+ employee.take_call(call)
return employee
return None
- def notify_call_escalated(self, call):
+ def notify_call_escalated(self, call) :
pass
- def notify_call_completed(self, call):
+ def notify_call_completed(self, call) :
pass
- def dispatch_queued_call_to_newly_freed_employee(self, call, employee):
+ def dispatch_queued_call_to_newly_freed_employee(self, call, employee) :
pass
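The dispatch logic above escalates a call to the lowest free rank at or above the call's own rank, and queues it when no one is free. A minimal standalone sketch of that priority order (names here are illustrative only, not part of the solution code):

```python
from collections import deque

def dispatch(call_rank, free_by_rank, queue):
    # free_by_rank maps each rank name to a list of free employee names.
    # Try the call's own rank first, then escalate upward.
    order = ['operator', 'supervisor', 'director']
    for rank in order[order.index(call_rank):]:
        if free_by_rank[rank]:
            return free_by_rank[rank].pop(0)  # assign the first free employee
    queue.append(call_rank)  # no one free at or above this rank: queue the call
    return None

free = {'operator': [], 'supervisor': ['sue'], 'director': ['dan']}
q = deque()
print(dispatch('operator', free, q))   # operators busy -> escalates to 'sue'
print(dispatch('director', free, q))   # -> 'dan'
print(dispatch('director', free, q))   # none free -> None, call queued
```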
diff --git a/solutions/object_oriented_design/deck_of_cards/deck_of_cards.ipynb b/solutions/object_oriented_design/deck_of_cards/deck_of_cards.ipynb
index 1a9bc1c5..49f2c768 100644
--- a/solutions/object_oriented_design/deck_of_cards/deck_of_cards.ipynb
+++ b/solutions/object_oriented_design/deck_of_cards/deck_of_cards.ipynb
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer)."
+ "This notebook was prepared by [Donne Martin](https://github.com/donnemartin) . Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer) ."
]
},
{
@@ -57,7 +57,7 @@
"import sys\n",
"\n",
"\n",
- "class Suit(Enum):\n",
+ "class Suit(Enum) :\n",
"\n",
" HEART = 0\n",
" DIAMOND = 1\n",
@@ -65,100 +65,100 @@
" SPADE = 3\n",
"\n",
"\n",
- "class Card(metaclass=ABCMeta):\n",
+ "class Card(metaclass=ABCMeta) :\n",
"\n",
- " def __init__(self, value, suit):\n",
+ " def __init__(self, value, suit) :\n",
" self.value = value\n",
" self.suit = suit\n",
" self.is_available = True\n",
"\n",
" @property\n",
" @abstractmethod\n",
- " def value(self):\n",
+ " def value(self) :\n",
" pass\n",
"\n",
" @value.setter\n",
" @abstractmethod\n",
- " def value(self, other):\n",
+ " def value(self, other) :\n",
" pass\n",
"\n",
"\n",
- "class BlackJackCard(Card):\n",
+ "class BlackJackCard(Card) :\n",
"\n",
- " def __init__(self, value, suit):\n",
- " super(BlackJackCard, self).__init__(value, suit)\n",
+ " def __init__(self, value, suit) :\n",
+ " super(BlackJackCard, self) .__init__(value, suit) \n",
"\n",
- " def is_ace(self):\n",
+ " def is_ace(self) :\n",
" return self._value == 1\n",
"\n",
- " def is_face_card(self):\n",
+ " def is_face_card(self) :\n",
" \"\"\"Jack = 11, Queen = 12, King = 13\"\"\"\n",
" return 10 < self._value <= 13\n",
"\n",
" @property\n",
- " def value(self):\n",
+ " def value(self) :\n",
"        if self.is_ace():\n",
" return 1\n",
- " elif self.is_face_card():\n",
+ " elif self.is_face_card() :\n",
" return 10\n",
" else:\n",
" return self._value\n",
"\n",
" @value.setter\n",
- " def value(self, new_value):\n",
+ " def value(self, new_value) :\n",
" if 1 <= new_value <= 13:\n",
" self._value = new_value\n",
" else:\n",
" raise ValueError('Invalid card value: {}'.format(new_value))\n",
"\n",
"\n",
- "class Hand(object):\n",
+ "class Hand(object) :\n",
"\n",
- " def __init__(self, cards):\n",
+ " def __init__(self, cards) :\n",
" self.cards = cards\n",
"\n",
- " def add_card(self, card):\n",
- " self.cards.append(card)\n",
+ " def add_card(self, card) :\n",
+ " self.cards.append(card) \n",
"\n",
- " def score(self):\n",
+ " def score(self) :\n",
" total_value = 0\n",
" for card in self.cards:\n",
" total_value += card.value\n",
" return total_value\n",
"\n",
"\n",
- "class BlackJackHand(Hand):\n",
+ "class BlackJackHand(Hand) :\n",
"\n",
" BLACKJACK = 21\n",
"\n",
- " def __init__(self, cards):\n",
- " super(BlackJackHand, self).__init__(cards)\n",
+ " def __init__(self, cards) :\n",
+ " super(BlackJackHand, self) .__init__(cards) \n",
"\n",
- " def score(self):\n",
+ " def score(self) :\n",
"        min_over = sys.maxsize\n",
"        max_under = -sys.maxsize\n",
- " for score in self.possible_scores():\n",
+ " for score in self.possible_scores() :\n",
" if self.BLACKJACK < score < min_over:\n",
" min_over = score\n",
" elif max_under < score <= self.BLACKJACK:\n",
" max_under = score\n",
"        return max_under if max_under != -sys.maxsize else min_over\n",
"\n",
- " def possible_scores(self):\n",
+ " def possible_scores(self) :\n",
" \"\"\"Return a list of possible scores, taking Aces into account.\"\"\"\n",
" # ...\n",
"\n",
"\n",
- "class Deck(object):\n",
+ "class Deck(object) :\n",
"\n",
- " def __init__(self, cards):\n",
+ " def __init__(self, cards) :\n",
" self.cards = cards\n",
" self.deal_index = 0\n",
"\n",
- " def remaining_cards(self):\n",
+ " def remaining_cards(self) :\n",
"        return len(self.cards) - self.deal_index\n",
"\n",
- " def deal_card():\n",
+    "    def deal_card(self) :\n",
" try:\n",
" card = self.cards[self.deal_index]\n",
" card.is_available = False\n",
@@ -167,7 +167,7 @@
" return None\n",
" return card\n",
"\n",
- " def shuffle(self): # ..."
+ " def shuffle(self) : # ..."
]
}
],
diff --git a/solutions/object_oriented_design/deck_of_cards/deck_of_cards.py b/solutions/object_oriented_design/deck_of_cards/deck_of_cards.py
index a4708758..48eea338 100644
--- a/solutions/object_oriented_design/deck_of_cards/deck_of_cards.py
+++ b/solutions/object_oriented_design/deck_of_cards/deck_of_cards.py
@@ -3,7 +3,7 @@ from enum import Enum
import sys
-class Suit(Enum):
+class Suit(Enum) :
HEART = 0
DIAMOND = 1
@@ -11,100 +11,100 @@ class Suit(Enum):
SPADE = 3
-class Card(metaclass=ABCMeta):
+class Card(metaclass=ABCMeta) :
- def __init__(self, value, suit):
+ def __init__(self, value, suit) :
self.value = value
self.suit = suit
self.is_available = True
@property
@abstractmethod
- def value(self):
+ def value(self) :
pass
@value.setter
@abstractmethod
- def value(self, other):
+ def value(self, other) :
pass
-class BlackJackCard(Card):
+class BlackJackCard(Card) :
- def __init__(self, value, suit):
- super(BlackJackCard, self).__init__(value, suit)
+ def __init__(self, value, suit) :
+ super(BlackJackCard, self) .__init__(value, suit)
- def is_ace(self):
+ def is_ace(self) :
return self._value == 1
- def is_face_card(self):
+ def is_face_card(self) :
"""Jack = 11, Queen = 12, King = 13"""
return 10 < self._value <= 13
@property
- def value(self):
+ def value(self) :
if self.is_ace():
return 1
- elif self.is_face_card():
+ elif self.is_face_card() :
return 10
else:
return self._value
@value.setter
- def value(self, new_value):
+ def value(self, new_value) :
if 1 <= new_value <= 13:
self._value = new_value
else:
raise ValueError('Invalid card value: {}'.format(new_value))
-class Hand(object):
+class Hand(object) :
- def __init__(self, cards):
+ def __init__(self, cards) :
self.cards = cards
- def add_card(self, card):
- self.cards.append(card)
+ def add_card(self, card) :
+ self.cards.append(card)
- def score(self):
+ def score(self) :
total_value = 0
for card in self.cards:
total_value += card.value
return total_value
-class BlackJackHand(Hand):
+class BlackJackHand(Hand) :
BLACKJACK = 21
- def __init__(self, cards):
- super(BlackJackHand, self).__init__(cards)
+ def __init__(self, cards) :
+ super(BlackJackHand, self) .__init__(cards)
- def score(self):
+ def score(self) :
min_over = sys.maxsize
max_under = -sys.maxsize
- for score in self.possible_scores():
+ for score in self.possible_scores() :
if self.BLACKJACK < score < min_over:
min_over = score
elif max_under < score <= self.BLACKJACK:
max_under = score
return max_under if max_under != -sys.maxsize else min_over
- def possible_scores(self):
+ def possible_scores(self) :
"""Return a list of possible scores, taking Aces into account."""
pass
-class Deck(object):
+class Deck(object) :
- def __init__(self, cards):
+ def __init__(self, cards) :
self.cards = cards
self.deal_index = 0
- def remaining_cards(self):
+ def remaining_cards(self) :
return len(self.cards) - self.deal_index
- def deal_card(self):
+ def deal_card(self) :
try:
card = self.cards[self.deal_index]
card.is_available = False
@@ -113,5 +113,5 @@ class Deck(object):
return None
return card
- def shuffle(self):
+ def shuffle(self) :
pass
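The `possible_scores` method is left unimplemented above. One way to sketch it, assuming Aces are stored with value 1 and may count as 1 or 11 (plain ints stand in for the Card objects of the solution):

```python
from itertools import product

def possible_scores(card_values):
    """Return the sorted distinct totals a hand can score, with each Ace as 1 or 11."""
    aces = card_values.count(1)                      # Aces are stored as value 1
    base = sum(v for v in card_values if v != 1)     # non-Ace cards have fixed value
    totals = set()
    for combo in product((1, 11), repeat=aces):      # every Ace valuation
        totals.add(base + sum(combo))
    return sorted(totals)

print(possible_scores([1, 10]))    # Ace + ten-value card -> [11, 21]
print(possible_scores([1, 1, 9]))  # two Aces -> [11, 21, 31]
```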
diff --git a/solutions/object_oriented_design/hash_table/hash_map.ipynb b/solutions/object_oriented_design/hash_table/hash_map.ipynb
index 92713d94..57aba3a4 100644
--- a/solutions/object_oriented_design/hash_table/hash_map.ipynb
+++ b/solutions/object_oriented_design/hash_table/hash_map.ipynb
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer)."
+ "This notebook was prepared by [Donne Martin](https://github.com/donnemartin) . Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer) ."
]
},
{
@@ -56,44 +56,44 @@
],
"source": [
"%%writefile hash_map.py\n",
- "class Item(object):\n",
+ "class Item(object) :\n",
"\n",
- " def __init__(self, key, value):\n",
+ " def __init__(self, key, value) :\n",
" self.key = key\n",
" self.value = value\n",
"\n",
"\n",
- "class HashTable(object):\n",
+ "class HashTable(object) :\n",
"\n",
- " def __init__(self, size):\n",
+ " def __init__(self, size) :\n",
" self.size = size\n",
- " self.table = [[] for _ in range(self.size)]\n",
+ " self.table = [[] for _ in range(self.size) ]\n",
"\n",
- " def _hash_function(self, key):\n",
+ " def _hash_function(self, key) :\n",
" return key % self.size\n",
"\n",
- " def set(self, key, value):\n",
- " hash_index = self._hash_function(key)\n",
+ " def set(self, key, value) :\n",
+ " hash_index = self._hash_function(key) \n",
" for item in self.table[hash_index]:\n",
" if item.key == key:\n",
" item.value = value\n",
" return\n",
" self.table[hash_index].append(Item(key, value))\n",
"\n",
- " def get(self, key):\n",
- " hash_index = self._hash_function(key)\n",
+ " def get(self, key) :\n",
+ " hash_index = self._hash_function(key) \n",
" for item in self.table[hash_index]:\n",
" if item.key == key:\n",
" return item.value\n",
- " raise KeyError('Key not found')\n",
+ " raise KeyError('Key not found') \n",
"\n",
- " def remove(self, key):\n",
- " hash_index = self._hash_function(key)\n",
- " for index, item in enumerate(self.table[hash_index]):\n",
+ " def remove(self, key) :\n",
+ " hash_index = self._hash_function(key) \n",
+ " for index, item in enumerate(self.table[hash_index]) :\n",
" if item.key == key:\n",
" del self.table[hash_index][index]\n",
" return\n",
- " raise KeyError('Key not found')"
+ " raise KeyError('Key not found') "
]
}
],
diff --git a/solutions/object_oriented_design/hash_table/hash_map.py b/solutions/object_oriented_design/hash_table/hash_map.py
index 33d9a35d..feb868df 100644
--- a/solutions/object_oriented_design/hash_table/hash_map.py
+++ b/solutions/object_oriented_design/hash_table/hash_map.py
@@ -1,38 +1,38 @@
-class Item(object):
+class Item(object) :
- def __init__(self, key, value):
+ def __init__(self, key, value) :
self.key = key
self.value = value
-class HashTable(object):
+class HashTable(object) :
- def __init__(self, size):
+ def __init__(self, size) :
self.size = size
- self.table = [[] for _ in range(self.size)]
+ self.table = [[] for _ in range(self.size) ]
- def _hash_function(self, key):
+ def _hash_function(self, key) :
return key % self.size
- def set(self, key, value):
- hash_index = self._hash_function(key)
+ def set(self, key, value) :
+ hash_index = self._hash_function(key)
for item in self.table[hash_index]:
if item.key == key:
item.value = value
return
self.table[hash_index].append(Item(key, value))
- def get(self, key):
- hash_index = self._hash_function(key)
+ def get(self, key) :
+ hash_index = self._hash_function(key)
for item in self.table[hash_index]:
if item.key == key:
return item.value
- raise KeyError('Key not found')
+ raise KeyError('Key not found')
- def remove(self, key):
- hash_index = self._hash_function(key)
- for index, item in enumerate(self.table[hash_index]):
+ def remove(self, key) :
+ hash_index = self._hash_function(key)
+ for index, item in enumerate(self.table[hash_index]) :
if item.key == key:
del self.table[hash_index][index]
return
- raise KeyError('Key not found')
+ raise KeyError('Key not found')
diff --git a/solutions/object_oriented_design/lru_cache/lru_cache.ipynb b/solutions/object_oriented_design/lru_cache/lru_cache.ipynb
index cd91da11..6d5a40ef 100644
--- a/solutions/object_oriented_design/lru_cache/lru_cache.ipynb
+++ b/solutions/object_oriented_design/lru_cache/lru_cache.ipynb
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer)."
+ "This notebook was prepared by [Donne Martin](https://github.com/donnemartin) . Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer) ."
]
},
{
@@ -52,67 +52,67 @@
],
"source": [
"%%writefile lru_cache.py\n",
- "class Node(object):\n",
+ "class Node(object) :\n",
"\n",
- " def __init__(self, results):\n",
+ " def __init__(self, results) :\n",
" self.results = results\n",
" self.prev = None\n",
" self.next = None\n",
"\n",
"\n",
- "class LinkedList(object):\n",
+ "class LinkedList(object) :\n",
"\n",
- " def __init__(self):\n",
+ " def __init__(self) :\n",
" self.head = None\n",
" self.tail = None\n",
"\n",
- " def move_to_front(self, node): # ...\n",
- " def append_to_front(self, node): # ...\n",
- " def remove_from_tail(self): # ...\n",
+ " def move_to_front(self, node) : # ...\n",
+ " def append_to_front(self, node) : # ...\n",
+ " def remove_from_tail(self) : # ...\n",
"\n",
"\n",
- "class Cache(object):\n",
+ "class Cache(object) :\n",
"\n",
- " def __init__(self, MAX_SIZE):\n",
+ " def __init__(self, MAX_SIZE) :\n",
" self.MAX_SIZE = MAX_SIZE\n",
" self.size = 0\n",
" self.lookup = {} # key: query, value: node\n",
- " self.linked_list = LinkedList()\n",
+ " self.linked_list = LinkedList() \n",
"\n",
- " def get(self, query)\n",
+    "    def get(self, query) :\n",
" \"\"\"Get the stored query result from the cache.\n",
" \n",
" Accessing a node updates its position to the front of the LRU list.\n",
" \"\"\"\n",
- " node = self.lookup.get(query)\n",
+ " node = self.lookup.get(query) \n",
" if node is None:\n",
" return None\n",
- " self.linked_list.move_to_front(node)\n",
+ " self.linked_list.move_to_front(node) \n",
" return node.results\n",
"\n",
- " def set(self, results, query):\n",
+ " def set(self, results, query) :\n",
" \"\"\"Set the result for the given query key in the cache.\n",
" \n",
" When updating an entry, updates its position to the front of the LRU list.\n",
" If the entry is new and the cache is at capacity, removes the oldest entry\n",
" before the new entry is added.\n",
" \"\"\"\n",
- " node = self.lookup.get(query)\n",
+ " node = self.lookup.get(query) \n",
" if node is not None:\n",
" # Key exists in cache, update the value\n",
" node.results = results\n",
- " self.linked_list.move_to_front(node)\n",
+ " self.linked_list.move_to_front(node) \n",
" else:\n",
" # Key does not exist in cache\n",
" if self.size == self.MAX_SIZE:\n",
" # Remove the oldest entry from the linked list and lookup\n",
- " self.lookup.pop(self.linked_list.tail.query, None)\n",
- " self.linked_list.remove_from_tail()\n",
+ " self.lookup.pop(self.linked_list.tail.query, None) \n",
+ " self.linked_list.remove_from_tail() \n",
" else:\n",
" self.size += 1\n",
" # Add the new key and value\n",
- " new_node = Node(results)\n",
- " self.linked_list.append_to_front(new_node)\n",
+ " new_node = Node(results) \n",
+ " self.linked_list.append_to_front(new_node) \n",
" self.lookup[query] = new_node"
]
}
diff --git a/solutions/object_oriented_design/lru_cache/lru_cache.py b/solutions/object_oriented_design/lru_cache/lru_cache.py
index acee4651..43760127 100644
--- a/solutions/object_oriented_design/lru_cache/lru_cache.py
+++ b/solutions/object_oriented_design/lru_cache/lru_cache.py
@@ -1,66 +1,66 @@
-class Node(object):
+class Node(object) :
- def __init__(self, results):
+ def __init__(self, results) :
self.results = results
self.prev = None
self.next = None
-class LinkedList(object):
+class LinkedList(object) :
- def __init__(self):
+ def __init__(self) :
self.head = None
self.tail = None
- def move_to_front(self, node):
+ def move_to_front(self, node) :
pass
- def append_to_front(self, node):
+ def append_to_front(self, node) :
pass
- def remove_from_tail(self):
+ def remove_from_tail(self) :
pass
-class Cache(object):
+class Cache(object) :
- def __init__(self, MAX_SIZE):
+ def __init__(self, MAX_SIZE) :
self.MAX_SIZE = MAX_SIZE
self.size = 0
self.lookup = {} # key: query, value: node
- self.linked_list = LinkedList()
+ self.linked_list = LinkedList()
- def get(self, query):
+ def get(self, query) :
"""Get the stored query result from the cache.
Accessing a node updates its position to the front of the LRU list.
"""
- node = self.lookup.get(query)
+ node = self.lookup.get(query)
if node is None:
return None
- self.linked_list.move_to_front(node)
+ self.linked_list.move_to_front(node)
return node.results
- def set(self, results, query):
+ def set(self, results, query) :
"""Set the result for the given query key in the cache.
When updating an entry, updates its position to the front of the LRU list.
If the entry is new and the cache is at capacity, removes the oldest entry
before the new entry is added.
"""
- node = self.lookup.get(query)
+ node = self.lookup.get(query)
if node is not None:
# Key exists in cache, update the value
node.results = results
- self.linked_list.move_to_front(node)
+ self.linked_list.move_to_front(node)
else:
# Key does not exist in cache
if self.size == self.MAX_SIZE:
# Remove the oldest entry from the linked list and lookup
- self.lookup.pop(self.linked_list.tail.query, None)
- self.linked_list.remove_from_tail()
+ self.lookup.pop(self.linked_list.tail.query, None)
+ self.linked_list.remove_from_tail()
else:
self.size += 1
# Add the new key and value
- new_node = Node(results)
- self.linked_list.append_to_front(new_node)
+ new_node = Node(results)
+ self.linked_list.append_to_front(new_node)
self.lookup[query] = new_node
diff --git a/solutions/object_oriented_design/online_chat/online_chat.ipynb b/solutions/object_oriented_design/online_chat/online_chat.ipynb
index b9f84ef4..cf0e987d 100644
--- a/solutions/object_oriented_design/online_chat/online_chat.ipynb
+++ b/solutions/object_oriented_design/online_chat/online_chat.ipynb
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer)."
+ "This notebook was prepared by [Donne Martin](https://github.com/donnemartin) . Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer) ."
]
},
{
@@ -67,21 +67,21 @@
"from abc import ABCMeta\n",
"\n",
"\n",
- "class UserService(object):\n",
+ "class UserService(object) :\n",
"\n",
- " def __init__(self):\n",
+ " def __init__(self) :\n",
" self.users_by_id = {} # key: user id, value: User\n",
"\n",
- " def add_user(self, user_id, name, pass_hash): # ...\n",
- " def remove_user(self, user_id): # ...\n",
- " def add_friend_request(self, from_user_id, to_user_id): # ...\n",
- " def approve_friend_request(self, from_user_id, to_user_id): # ...\n",
- " def reject_friend_request(self, from_user_id, to_user_id): # ...\n",
+ " def add_user(self, user_id, name, pass_hash) : # ...\n",
+ " def remove_user(self, user_id) : # ...\n",
+ " def add_friend_request(self, from_user_id, to_user_id) : # ...\n",
+ " def approve_friend_request(self, from_user_id, to_user_id) : # ...\n",
+ " def reject_friend_request(self, from_user_id, to_user_id) : # ...\n",
"\n",
"\n",
- "class User(object):\n",
+ "class User(object) :\n",
"\n",
- " def __init__(self, user_id, name, pass_hash):\n",
+ " def __init__(self, user_id, name, pass_hash) :\n",
" self.user_id = user_id\n",
" self.name = name\n",
" self.pass_hash = pass_hash\n",
@@ -91,54 +91,54 @@
" self.received_friend_requests_by_friend_id = {} # key: friend id, value: AddRequest\n",
" self.sent_friend_requests_by_friend_id = {} # key: friend id, value: AddRequest\n",
"\n",
- " def message_user(self, friend_id, message): # ...\n",
- " def message_group(self, group_id, message): # ...\n",
- " def send_friend_request(self, friend_id): # ...\n",
- " def receive_friend_request(self, friend_id): # ...\n",
- " def approve_friend_request(self, friend_id): # ...\n",
- " def reject_friend_request(self, friend_id): # ...\n",
+ " def message_user(self, friend_id, message) : # ...\n",
+ " def message_group(self, group_id, message) : # ...\n",
+ " def send_friend_request(self, friend_id) : # ...\n",
+ " def receive_friend_request(self, friend_id) : # ...\n",
+ " def approve_friend_request(self, friend_id) : # ...\n",
+ " def reject_friend_request(self, friend_id) : # ...\n",
"\n",
"\n",
- "class Chat(metaclass=ABCMeta):\n",
+ "class Chat(metaclass=ABCMeta) :\n",
"\n",
- " def __init__(self, chat_id):\n",
+ " def __init__(self, chat_id) :\n",
" self.chat_id = chat_id\n",
" self.users = []\n",
" self.messages = []\n",
"\n",
"\n",
- "class PrivateChat(Chat):\n",
+ "class PrivateChat(Chat) :\n",
"\n",
- " def __init__(self, first_user, second_user):\n",
- " super(PrivateChat, self).__init__()\n",
- " self.users.append(first_user)\n",
- " self.users.append(second_user)\n",
+ " def __init__(self, first_user, second_user) :\n",
+ " super(PrivateChat, self) .__init__() \n",
+ " self.users.append(first_user) \n",
+ " self.users.append(second_user) \n",
"\n",
"\n",
- "class GroupChat(Chat):\n",
+ "class GroupChat(Chat) :\n",
"\n",
- " def add_user(self, user): # ...\n",
- " def remove_user(self, user): # ... \n",
+ " def add_user(self, user) : # ...\n",
+ " def remove_user(self, user) : # ... \n",
"\n",
"\n",
- "class Message(object):\n",
+ "class Message(object) :\n",
"\n",
- " def __init__(self, message_id, message, timestamp):\n",
+ " def __init__(self, message_id, message, timestamp) :\n",
" self.message_id = message_id\n",
" self.message = message\n",
" self.timestamp = timestamp\n",
"\n",
"\n",
- "class AddRequest(object):\n",
+ "class AddRequest(object) :\n",
"\n",
- " def __init__(self, from_user_id, to_user_id, request_status, timestamp):\n",
+ " def __init__(self, from_user_id, to_user_id, request_status, timestamp) :\n",
" self.from_user_id = from_user_id\n",
" self.to_user_id = to_user_id\n",
" self.request_status = request_status\n",
" self.timestamp = timestamp\n",
"\n",
"\n",
- "class RequestStatus(Enum):\n",
+ "class RequestStatus(Enum) :\n",
"\n",
" UNREAD = 0\n",
" READ = 1\n",
diff --git a/solutions/object_oriented_design/online_chat/online_chat.py b/solutions/object_oriented_design/online_chat/online_chat.py
index 7063ca04..8af594fa 100644
--- a/solutions/object_oriented_design/online_chat/online_chat.py
+++ b/solutions/object_oriented_design/online_chat/online_chat.py
@@ -2,30 +2,30 @@ from abc import ABCMeta
from enum import Enum
-class UserService(object):
+class UserService(object) :
- def __init__(self):
+ def __init__(self) :
self.users_by_id = {} # key: user id, value: User
- def add_user(self, user_id, name, pass_hash):
+ def add_user(self, user_id, name, pass_hash) :
pass
- def remove_user(self, user_id):
+ def remove_user(self, user_id) :
pass
- def add_friend_request(self, from_user_id, to_user_id):
+ def add_friend_request(self, from_user_id, to_user_id) :
pass
- def approve_friend_request(self, from_user_id, to_user_id):
+ def approve_friend_request(self, from_user_id, to_user_id) :
pass
- def reject_friend_request(self, from_user_id, to_user_id):
+ def reject_friend_request(self, from_user_id, to_user_id) :
pass
-class User(object):
+class User(object) :
- def __init__(self, user_id, name, pass_hash):
+ def __init__(self, user_id, name, pass_hash) :
self.user_id = user_id
self.name = name
self.pass_hash = pass_hash
@@ -35,68 +35,68 @@ class User(object):
self.received_friend_requests_by_friend_id = {} # key: friend id, value: AddRequest
self.sent_friend_requests_by_friend_id = {} # key: friend id, value: AddRequest
- def message_user(self, friend_id, message):
+ def message_user(self, friend_id, message) :
pass
- def message_group(self, group_id, message):
+ def message_group(self, group_id, message) :
pass
- def send_friend_request(self, friend_id):
+ def send_friend_request(self, friend_id) :
pass
- def receive_friend_request(self, friend_id):
+ def receive_friend_request(self, friend_id) :
pass
- def approve_friend_request(self, friend_id):
+ def approve_friend_request(self, friend_id) :
pass
- def reject_friend_request(self, friend_id):
+ def reject_friend_request(self, friend_id) :
pass
-class Chat(metaclass=ABCMeta):
+class Chat(metaclass=ABCMeta) :
- def __init__(self, chat_id):
+ def __init__(self, chat_id) :
self.chat_id = chat_id
self.users = []
self.messages = []
-class PrivateChat(Chat):
+class PrivateChat(Chat) :
- def __init__(self, first_user, second_user):
- super(PrivateChat, self).__init__()
- self.users.append(first_user)
- self.users.append(second_user)
+ def __init__(self, first_user, second_user) :
+ super(PrivateChat, self) .__init__()
+ self.users.append(first_user)
+ self.users.append(second_user)
-class GroupChat(Chat):
+class GroupChat(Chat) :
- def add_user(self, user):
+ def add_user(self, user) :
pass
- def remove_user(self, user):
+ def remove_user(self, user) :
pass
-class Message(object):
+class Message(object) :
- def __init__(self, message_id, message, timestamp):
+ def __init__(self, message_id, message, timestamp) :
self.message_id = message_id
self.message = message
self.timestamp = timestamp
-class AddRequest(object):
+class AddRequest(object) :
- def __init__(self, from_user_id, to_user_id, request_status, timestamp):
+ def __init__(self, from_user_id, to_user_id, request_status, timestamp) :
self.from_user_id = from_user_id
self.to_user_id = to_user_id
self.request_status = request_status
self.timestamp = timestamp
-class RequestStatus(Enum):
+class RequestStatus(Enum) :
UNREAD = 0
READ = 1
diff --git a/solutions/object_oriented_design/parking_lot/parking_lot.ipynb b/solutions/object_oriented_design/parking_lot/parking_lot.ipynb
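The friend-request flow above records an AddRequest on both the sender's and the receiver's side, so either party can look it up by the other's id. A hypothetical sketch of that two-sided bookkeeping with plain dicts (the status values here are illustrative; the solution's RequestStatus tracks read state):

```python
from enum import Enum

class RequestStatus(Enum):
    PENDING = 0
    ACCEPTED = 1
    REJECTED = 2

def send_request(users, from_id, to_id):
    # Record the request on both sides, keyed by the other user's id.
    users[from_id]['sent'][to_id] = RequestStatus.PENDING
    users[to_id]['received'][from_id] = RequestStatus.PENDING

def approve_request(users, from_id, to_id):
    # Flip both records to ACCEPTED and make the users friends.
    users[from_id]['sent'][to_id] = RequestStatus.ACCEPTED
    users[to_id]['received'][from_id] = RequestStatus.ACCEPTED
    users[from_id]['friends'].add(to_id)
    users[to_id]['friends'].add(from_id)

users = {u: {'sent': {}, 'received': {}, 'friends': set()} for u in (1, 2)}
send_request(users, 1, 2)
approve_request(users, 1, 2)
print(users[1]['friends'], users[2]['friends'])  # each user now lists the other
```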
index 4613b79b..9c88b46c 100644
--- a/solutions/object_oriented_design/parking_lot/parking_lot.ipynb
+++ b/solutions/object_oriented_design/parking_lot/parking_lot.ipynb
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer)."
+ "This notebook was prepared by [Donne Martin](https://github.com/donnemartin) . Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer) ."
]
},
{
@@ -59,107 +59,107 @@
"from abc import ABCMeta, abstractmethod\n",
"\n",
"\n",
- "class VehicleSize(Enum):\n",
+ "class VehicleSize(Enum) :\n",
"\n",
" MOTORCYCLE = 0\n",
" COMPACT = 1\n",
" LARGE = 2\n",
"\n",
"\n",
- "class Vehicle(metaclass=ABCMeta):\n",
+ "class Vehicle(metaclass=ABCMeta) :\n",
"\n",
- " def __init__(self, vehicle_size, license_plate, spot_size):\n",
+ " def __init__(self, vehicle_size, license_plate, spot_size) :\n",
" self.vehicle_size = vehicle_size\n",
" self.license_plate = license_plate\n",
" self.spot_size = spot_size\n",
" self.spots_taken = []\n",
"\n",
- " def clear_spots(self):\n",
+ " def clear_spots(self) :\n",
" for spot in self.spots_taken:\n",
- " spot.remove_vehicle(self)\n",
+ " spot.remove_vehicle(self) \n",
" self.spots_taken = []\n",
"\n",
- " def take_spot(self, spot):\n",
- " self.spots_taken.append(spot)\n",
+ " def take_spot(self, spot) :\n",
+ " self.spots_taken.append(spot) \n",
"\n",
" @abstractmethod\n",
- " def can_fit_in_spot(self, spot):\n",
+ " def can_fit_in_spot(self, spot) :\n",
" pass\n",
"\n",
"\n",
- "class Motorcycle(Vehicle):\n",
+ "class Motorcycle(Vehicle) :\n",
"\n",
- " def __init__(self, license_plate):\n",
- " super(Motorcycle, self).__init__(VehicleSize.MOTORCYCLE, license_plate, spot_size=1)\n",
+ " def __init__(self, license_plate) :\n",
+ " super(Motorcycle, self) .__init__(VehicleSize.MOTORCYCLE, license_plate, spot_size=1) \n",
"\n",
- " def can_fit_in_spot(self, spot):\n",
+ " def can_fit_in_spot(self, spot) :\n",
" return True\n",
"\n",
"\n",
- "class Car(Vehicle):\n",
+ "class Car(Vehicle) :\n",
"\n",
- " def __init__(self, license_plate):\n",
- " super(Car, self).__init__(VehicleSize.COMPACT, license_plate, spot_size=1)\n",
+ " def __init__(self, license_plate) :\n",
+ " super(Car, self) .__init__(VehicleSize.COMPACT, license_plate, spot_size=1) \n",
"\n",
- " def can_fit_in_spot(self, spot):\n",
+ " def can_fit_in_spot(self, spot) :\n",
"        return spot.size == VehicleSize.LARGE or spot.size == VehicleSize.COMPACT\n",
"\n",
"\n",
- "class Bus(Vehicle):\n",
+ "class Bus(Vehicle) :\n",
"\n",
- " def __init__(self, license_plate):\n",
- " super(Bus, self).__init__(VehicleSize.LARGE, license_plate, spot_size=5)\n",
+ " def __init__(self, license_plate) :\n",
+ " super(Bus, self) .__init__(VehicleSize.LARGE, license_plate, spot_size=5) \n",
"\n",
- " def can_fit_in_spot(self, spot):\n",
+ " def can_fit_in_spot(self, spot) :\n",
"        return spot.size == VehicleSize.LARGE\n",
"\n",
"\n",
- "class ParkingLot(object):\n",
+ "class ParkingLot(object) :\n",
"\n",
- " def __init__(self, num_levels):\n",
+ " def __init__(self, num_levels) :\n",
" self.num_levels = num_levels\n",
" self.levels = []\n",
"\n",
- " def park_vehicle(self, vehicle):\n",
+ " def park_vehicle(self, vehicle) :\n",
"        for level in self.levels:\n",
- " if level.park_vehicle(vehicle):\n",
+ " if level.park_vehicle(vehicle) :\n",
" return True\n",
" return False\n",
"\n",
"\n",
- "class Level(object):\n",
+ "class Level(object) :\n",
"\n",
" SPOTS_PER_ROW = 10\n",
"\n",
- " def __init__(self, floor, total_spots):\n",
+ " def __init__(self, floor, total_spots) :\n",
" self.floor = floor\n",
" self.num_spots = total_spots\n",
" self.available_spots = 0\n",
" self.parking_spots = []\n",
"\n",
- " def spot_freed(self):\n",
+ " def spot_freed(self) :\n",
" self.available_spots += 1\n",
"\n",
- " def park_vehicle(self, vehicle):\n",
- " spot = self._find_available_spot(vehicle)\n",
+ " def park_vehicle(self, vehicle) :\n",
+ " spot = self._find_available_spot(vehicle) \n",
" if spot is None:\n",
" return None\n",
" else:\n",
- " spot.park_vehicle(vehicle)\n",
+ " spot.park_vehicle(vehicle) \n",
" return spot\n",
"\n",
- " def _find_available_spot(self, vehicle):\n",
+ " def _find_available_spot(self, vehicle) :\n",
" \"\"\"Find an available spot where vehicle can fit, or return None\"\"\"\n",
" # ...\n",
"\n",
- " def _park_starting_at_spot(self, spot, vehicle):\n",
+ " def _park_starting_at_spot(self, spot, vehicle) :\n",
" \"\"\"Occupy starting at spot.spot_number to vehicle.spot_size.\"\"\"\n",
" # ...\n",
"\n",
"\n",
- "class ParkingSpot(object):\n",
+ "class ParkingSpot(object) :\n",
"\n",
- " def __init__(self, level, row, spot_number, spot_size, vehicle_size):\n",
+ " def __init__(self, level, row, spot_number, spot_size, vehicle_size) :\n",
" self.level = level\n",
" self.row = row\n",
" self.spot_number = spot_number\n",
@@ -167,16 +167,16 @@
" self.vehicle_size = vehicle_size\n",
" self.vehicle = None\n",
"\n",
- " def is_available(self):\n",
+ " def is_available(self) :\n",
"        return self.vehicle is None\n",
"\n",
- " def can_fit_vehicle(self, vehicle):\n",
+ " def can_fit_vehicle(self, vehicle) :\n",
" if self.vehicle is not None:\n",
" return False\n",
- " return vehicle.can_fit_in_spot(self)\n",
+ " return vehicle.can_fit_in_spot(self) \n",
"\n",
- " def park_vehicle(self, vehicle): # ...\n",
- " def remove_vehicle(self): # ..."
+ " def park_vehicle(self, vehicle) : # ...\n",
+ " def remove_vehicle(self) : # ..."
]
}
],
diff --git a/solutions/object_oriented_design/parking_lot/parking_lot.py b/solutions/object_oriented_design/parking_lot/parking_lot.py
index 08852d9d..5c24b9ea 100644
--- a/solutions/object_oriented_design/parking_lot/parking_lot.py
+++ b/solutions/object_oriented_design/parking_lot/parking_lot.py
@@ -2,107 +2,107 @@ from abc import ABCMeta, abstractmethod
from enum import Enum
-class VehicleSize(Enum):
+class VehicleSize(Enum) :
MOTORCYCLE = 0
COMPACT = 1
LARGE = 2
-class Vehicle(metaclass=ABCMeta):
+class Vehicle(metaclass=ABCMeta) :
- def __init__(self, vehicle_size, license_plate, spot_size):
+ def __init__(self, vehicle_size, license_plate, spot_size) :
self.vehicle_size = vehicle_size
self.license_plate = license_plate
        self.spot_size = spot_size
self.spots_taken = []
- def clear_spots(self):
+ def clear_spots(self) :
for spot in self.spots_taken:
- spot.remove_vehicle(self)
+ spot.remove_vehicle(self)
self.spots_taken = []
- def take_spot(self, spot):
- self.spots_taken.append(spot)
+ def take_spot(self, spot) :
+ self.spots_taken.append(spot)
@abstractmethod
- def can_fit_in_spot(self, spot):
+ def can_fit_in_spot(self, spot) :
pass
-class Motorcycle(Vehicle):
+class Motorcycle(Vehicle) :
- def __init__(self, license_plate):
- super(Motorcycle, self).__init__(VehicleSize.MOTORCYCLE, license_plate, spot_size=1)
+ def __init__(self, license_plate) :
+ super(Motorcycle, self) .__init__(VehicleSize.MOTORCYCLE, license_plate, spot_size=1)
- def can_fit_in_spot(self, spot):
+ def can_fit_in_spot(self, spot) :
return True
-class Car(Vehicle):
+class Car(Vehicle) :
- def __init__(self, license_plate):
- super(Car, self).__init__(VehicleSize.COMPACT, license_plate, spot_size=1)
+ def __init__(self, license_plate) :
+ super(Car, self) .__init__(VehicleSize.COMPACT, license_plate, spot_size=1)
- def can_fit_in_spot(self, spot):
- return spot.size in (VehicleSize.LARGE, VehicleSize.COMPACT)
+ def can_fit_in_spot(self, spot) :
+ return spot.size in (VehicleSize.LARGE, VehicleSize.COMPACT)
-class Bus(Vehicle):
+class Bus(Vehicle) :
- def __init__(self, license_plate):
- super(Bus, self).__init__(VehicleSize.LARGE, license_plate, spot_size=5)
+ def __init__(self, license_plate) :
+ super(Bus, self) .__init__(VehicleSize.LARGE, license_plate, spot_size=5)
- def can_fit_in_spot(self, spot):
+ def can_fit_in_spot(self, spot) :
return spot.size == VehicleSize.LARGE
-class ParkingLot(object):
+class ParkingLot(object) :
- def __init__(self, num_levels):
+ def __init__(self, num_levels) :
self.num_levels = num_levels
self.levels = [] # List of Levels
- def park_vehicle(self, vehicle):
+ def park_vehicle(self, vehicle) :
for level in self.levels:
- if level.park_vehicle(vehicle):
+ if level.park_vehicle(vehicle) :
return True
return False
-class Level(object):
+class Level(object) :
SPOTS_PER_ROW = 10
- def __init__(self, floor, total_spots):
+ def __init__(self, floor, total_spots) :
self.floor = floor
self.num_spots = total_spots
self.available_spots = 0
self.spots = [] # List of ParkingSpots
- def spot_freed(self):
+ def spot_freed(self) :
self.available_spots += 1
- def park_vehicle(self, vehicle):
- spot = self._find_available_spot(vehicle)
+ def park_vehicle(self, vehicle) :
+ spot = self._find_available_spot(vehicle)
if spot is None:
return None
else:
- spot.park_vehicle(vehicle)
+ spot.park_vehicle(vehicle)
return spot
- def _find_available_spot(self, vehicle):
+ def _find_available_spot(self, vehicle) :
"""Find an available spot where vehicle can fit, or return None"""
pass
- def _park_starting_at_spot(self, spot, vehicle):
+ def _park_starting_at_spot(self, spot, vehicle) :
"""Occupy starting at spot.spot_number to vehicle.spot_size."""
pass
-class ParkingSpot(object):
+class ParkingSpot(object) :
- def __init__(self, level, row, spot_number, spot_size, vehicle_size):
+ def __init__(self, level, row, spot_number, spot_size, vehicle_size) :
self.level = level
self.row = row
self.spot_number = spot_number
@@ -110,16 +110,16 @@ class ParkingSpot(object):
self.vehicle_size = vehicle_size
self.vehicle = None
- def is_available(self):
+ def is_available(self) :
        return self.vehicle is None
- def can_fit_vehicle(self, vehicle):
+ def can_fit_vehicle(self, vehicle) :
if self.vehicle is not None:
return False
- return vehicle.can_fit_in_spot(self)
+ return vehicle.can_fit_in_spot(self)
- def park_vehicle(self, vehicle):
+ def park_vehicle(self, vehicle) :
pass
- def remove_vehicle(self):
+ def remove_vehicle(self) :
pass
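The fit checks in the classes above rely on double dispatch: `ParkingSpot.can_fit_vehicle` delegates back to the vehicle's own `can_fit_in_spot`, so each vehicle type owns its sizing rule. A minimal standalone sketch of that pattern (condensed classes for illustration, not the full hierarchy in the file):

```python
from enum import Enum


class VehicleSize(Enum):
    MOTORCYCLE = 0
    COMPACT = 1
    LARGE = 2


class Spot:
    """Minimal stand-in for ParkingSpot: holds a size and delegates the
    fit check back to the vehicle (double dispatch)."""

    def __init__(self, size):
        self.size = size
        self.vehicle = None

    def can_fit_vehicle(self, vehicle):
        if self.vehicle is not None:
            return False
        return vehicle.can_fit_in_spot(self)


class Car:
    def can_fit_in_spot(self, spot):
        return spot.size in (VehicleSize.LARGE, VehicleSize.COMPACT)


class Bus:
    def can_fit_in_spot(self, spot):
        return spot.size == VehicleSize.LARGE


compact = Spot(VehicleSize.COMPACT)
large = Spot(VehicleSize.LARGE)
print(compact.can_fit_vehicle(Car()))   # True
print(compact.can_fit_vehicle(Bus()))   # False
print(large.can_fit_vehicle(Bus()))     # True
```

Adding a new vehicle type then only requires implementing `can_fit_in_spot`; the spot code never changes.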
diff --git a/solutions/system_design/mint/README-zh-Hans.md b/solutions/system_design/mint/README-zh-Hans.md
index 58467bc6..c01f98f9 100644
--- a/solutions/system_design/mint/README-zh-Hans.md
+++ b/solutions/system_design/mint/README-zh-Hans.md
@@ -1,6 +1,6 @@
# 设计 Mint.com
-**注意:这个文档中的链接会直接指向[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题索引)中的有关部分,以避免重复的内容。您可以参考链接的相关内容,来了解其总的要点、方案的权衡取舍以及可选的替代方案。**
+**注意:这个文档中的链接会直接指向[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题索引) 中的有关部分,以避免重复的内容。您可以参考链接的相关内容,来了解其总的要点、方案的权衡取舍以及可选的替代方案。**
## 第一步:简述用例与约束条件
@@ -80,7 +80,7 @@
> 列出所有重要组件以规划概要设计。
-
+
## 第三步:设计核心组件
@@ -88,9 +88,9 @@
### 用例:用户连接到一个财务账户
-我们可以将 1000 万用户的信息存储在一个[关系数据库](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)中。我们应该讨论一下[选择SQL或NoSQL之间的用例和权衡](https://github.com/donnemartin/system-design-primer#sql-or-nosql)了。
+我们可以将 1000 万用户的信息存储在一个[关系数据库](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) 中。我们应该讨论一下[选择SQL或NoSQL之间的用例和权衡](https://github.com/donnemartin/system-design-primer#sql-or-nosql) 了。
-* **客户端** 作为一个[反向代理](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server),发送请求到 **Web 服务器**
+* **客户端** 作为一个[反向代理](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server) ,发送请求到 **Web 服务器**
* **Web 服务器** 转发请求到 **账户API** 服务器
* **账户API** 服务器将新输入的账户信息更新到 **SQL数据库** 的`accounts`表
@@ -106,13 +106,13 @@ account_url varchar(255) NOT NULL
account_login varchar(32) NOT NULL
account_password_hash char(64) NOT NULL
user_id int NOT NULL
-PRIMARY KEY(id)
-FOREIGN KEY(user_id) REFERENCES users(id)
+PRIMARY KEY(id)
+FOREIGN KEY(user_id) REFERENCES users(id)
```
-我们将在`id`,`user_id`和`created_at`等字段上创建一个[索引](https://github.com/donnemartin/system-design-primer#use-good-indices)以加速查找(对数时间而不是扫描整个表)并保持数据在内存中。从内存中顺序读取 1 MB数据花费大约250毫秒,而从SSD读取是其4倍,从磁盘读取是其80倍。1
+我们将在`id`,`user_id`和`created_at`等字段上创建一个[索引](https://github.com/donnemartin/system-design-primer#use-good-indices) 以加速查找(对数时间而不是扫描整个表)并保持数据在内存中。从内存中顺序读取 1 MB 数据花费大约 250 微秒,而从 SSD 读取是其 4 倍,从磁盘读取是其 80 倍。<sup>1</sup>
-我们将使用公开的[**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
+我们将使用公开的[**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest) :
```
$ curl -X POST --data '{ "user_id": "foo", "account_url": "bar", \
@@ -120,7 +120,7 @@ $ curl -X POST --data '{ "user_id": "foo", "account_url": "bar", \
https://mint.com/api/v1/account
```
-对于内部通信,我们可以使用[远程过程调用](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)。
+对于内部通信,我们可以使用[远程过程调用](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc) 。
接下来,服务从账户中提取交易。
@@ -136,8 +136,8 @@ $ curl -X POST --data '{ "user_id": "foo", "account_url": "bar", \
* **客户端**向 **Web服务器** 发送请求
* **Web服务器** 将请求转发到 **帐户API** 服务器
-* **帐户API** 服务器将job放在 **队列** 中,如 [Amazon SQS](https://aws.amazon.com/sqs/) 或者 [RabbitMQ](https://www.rabbitmq.com/)
- * 提取交易可能需要一段时间,我们可能希望[与队列异步](https://github.com/donnemartin/system-design-primer#asynchronism)地来做,虽然这会引入额外的复杂度。
+* **帐户API** 服务器将job放在 **队列** 中,如 [Amazon SQS](https://aws.amazon.com/sqs/) 或者 [RabbitMQ](https://www.rabbitmq.com/)
+ * 提取交易可能需要一段时间,我们可能希望[与队列异步](https://github.com/donnemartin/system-design-primer#asynchronism) 地来做,虽然这会引入额外的复杂度。
* **交易提取服务** 执行如下操作:
* 从 **Queue** 中拉取并从金融机构中提取给定用户的交易,将结果作为原始日志文件存储在 **对象存储区**。
* 使用 **分类服务** 来分类每个交易
@@ -156,25 +156,25 @@ created_at datetime NOT NULL
seller varchar(32) NOT NULL
amount decimal NOT NULL
user_id int NOT NULL
-PRIMARY KEY(id)
-FOREIGN KEY(user_id) REFERENCES users(id)
+PRIMARY KEY(id)
+FOREIGN KEY(user_id) REFERENCES users(id)
```
-我们将在 `id`,`user_id`,和 `created_at`字段上创建[索引](https://github.com/donnemartin/system-design-primer#use-good-indices)。
+我们将在 `id`,`user_id`,和 `created_at`字段上创建[索引](https://github.com/donnemartin/system-design-primer#use-good-indices) 。
`monthly_spending`表应该具有如下结构:
```
id int NOT NULL AUTO_INCREMENT
month_year date NOT NULL
-category varchar(32)
+category varchar(32)
amount decimal NOT NULL
user_id int NOT NULL
-PRIMARY KEY(id)
-FOREIGN KEY(user_id) REFERENCES users(id)
+PRIMARY KEY(id)
+FOREIGN KEY(user_id) REFERENCES users(id)
```
-我们将在`id`,`user_id`字段上创建[索引](https://github.com/donnemartin/system-design-primer#use-good-indices)。
+我们将在`id`,`user_id`字段上创建[索引](https://github.com/donnemartin/system-design-primer#use-good-indices) 。
#### 分类服务
@@ -183,7 +183,7 @@ FOREIGN KEY(user_id) REFERENCES users(id)
**告知你的面试官你准备写多少代码**。
```python
-class DefaultCategories(Enum):
+class DefaultCategories(Enum) :
HOUSING = 0
FOOD = 1
@@ -200,19 +200,19 @@ seller_category_map['Target'] = DefaultCategories.SHOPPING
对于一开始没有在映射中的卖家,我们可以通过评估用户提供的手动类别来进行众包。在 O(1) 时间内,我们可以用堆来快速查找每个卖家的顶端的手动覆盖。
```python
-class Categorizer(object):
+class Categorizer(object) :
- def __init__(self, seller_category_map, self.seller_category_crowd_overrides_map):
+    def __init__(self, seller_category_map, seller_category_crowd_overrides_map) :
self.seller_category_map = seller_category_map
self.seller_category_crowd_overrides_map = \
seller_category_crowd_overrides_map
- def categorize(self, transaction):
+ def categorize(self, transaction) :
if transaction.seller in self.seller_category_map:
return self.seller_category_map[transaction.seller]
elif transaction.seller in self.seller_category_crowd_overrides_map:
self.seller_category_map[transaction.seller] = \
- self.seller_category_crowd_overrides_map[transaction.seller].peek_min()
+ self.seller_category_crowd_overrides_map[transaction.seller].peek_min()
return self.seller_category_map[transaction.seller]
return None
```
@@ -220,9 +220,9 @@ class Categorizer(object):
交易实现:
```python
-class Transaction(object):
+class Transaction(object) :
- def __init__(self, created_at, seller, amount):
+ def __init__(self, created_at, seller, amount) :
        self.created_at = created_at
self.seller = seller
self.amount = amount
@@ -234,13 +234,13 @@ class Transaction(object):
`TABLE budget_overrides`中存储此覆盖。
```python
-class Budget(object):
+class Budget(object) :
- def __init__(self, income):
+ def __init__(self, income) :
self.income = income
- self.categories_to_budget_map = self.create_budget_template()
+ self.categories_to_budget_map = self.create_budget_template()
- def create_budget_template(self):
+ def create_budget_template(self) :
return {
            DefaultCategories.HOUSING: self.income * .4,
            DefaultCategories.FOOD: self.income * .2,
@@ -249,7 +249,7 @@ class Budget(object):
...
}
- def override_category_budget(self, category, amount):
+ def override_category_budget(self, category, amount) :
self.categories_to_budget_map[category] = amount
```
@@ -275,26 +275,26 @@ user_id timestamp seller amount
**MapReduce** 实现:
```python
-class SpendingByCategory(MRJob):
+class SpendingByCategory(MRJob) :
- def __init__(self, categorizer):
+ def __init__(self, categorizer) :
self.categorizer = categorizer
- self.current_year_month = calc_current_year_month()
+ self.current_year_month = calc_current_year_month()
...
- def calc_current_year_month(self):
+ def calc_current_year_month(self) :
"""返回当前年月"""
...
- def extract_year_month(self, timestamp):
+ def extract_year_month(self, timestamp) :
"""返回时间戳的年,月部分"""
...
- def handle_budget_notifications(self, key, total):
+ def handle_budget_notifications(self, key, total) :
"""如果接近或超出预算,调用通知API"""
...
- def mapper(self, _, line):
+ def mapper(self, _, line) :
"""解析每个日志行,提取和转换相关行。
参数行应为如下形式:
@@ -303,31 +303,31 @@ class SpendingByCategory(MRJob):
使用分类器来将卖家转换成类别,生成如下形式的key-value对:
- (user_id, 2016-01, shopping), 25
- (user_id, 2016-01, shopping), 100
- (user_id, 2016-01, gas), 50
+ (user_id, 2016-01, shopping) , 25
+ (user_id, 2016-01, shopping) , 100
+ (user_id, 2016-01, gas) , 50
"""
- user_id, timestamp, seller, amount = line.split('\t')
- category = self.categorizer.categorize(seller)
- period = self.extract_year_month(timestamp)
+ user_id, timestamp, seller, amount = line.split('\t')
+ category = self.categorizer.categorize(seller)
+ period = self.extract_year_month(timestamp)
if period == self.current_year_month:
- yield (user_id, period, category), amount
+ yield (user_id, period, category) , amount
- def reducer(self, key, value):
+    def reducer(self, key, values) :
"""将每个key对应的值求和。
- (user_id, 2016-01, shopping), 125
- (user_id, 2016-01, gas), 50
+ (user_id, 2016-01, shopping) , 125
+ (user_id, 2016-01, gas) , 50
"""
- total = sum(values)
- yield key, sum(values)
+ total = sum(values)
+ yield key, sum(values)
```
## 第四步:设计扩展
> 根据限制条件,找到并解决瓶颈。
-
+
**重要提示:不要从最初设计直接跳到最终设计中!**
@@ -337,20 +337,20 @@ class SpendingByCategory(MRJob):
我们将会介绍一些组件来完成设计,并解决架构扩张问题。内置的负载均衡器将不做讨论以节省篇幅。
-**为了避免重复讨论**,请参考[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)相关部分来了解其要点、方案的权衡取舍以及可选的替代方案。
+**为了避免重复讨论**,请参考[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引) 相关部分来了解其要点、方案的权衡取舍以及可选的替代方案。
-* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
-* [负载均衡器](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
-* [水平拓展](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
-* [反向代理(web 服务器)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
-* [API 服务(应用层)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
-* [缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
-* [关系型数据库管理系统 (RDBMS)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#关系型数据库管理系统rdbms)
-* [SQL 故障主从切换](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#故障切换)
-* [主从复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
-* [异步](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#异步)
-* [一致性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
-* [可用性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
+* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
+* [负载均衡器](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
+* [水平拓展](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
+* [反向代理(web 服务器)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
+* [API 服务(应用层)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
+* [缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
+* [关系型数据库管理系统 (RDBMS) ](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#关系型数据库管理系统rdbms)
+* [SQL 故障主从切换](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#故障切换)
+* [主从复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
+* [异步](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#异步)
+* [一致性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
+* [可用性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
我们将增加一个额外的用例:**用户** 访问摘要和交易数据。
@@ -366,7 +366,7 @@ class SpendingByCategory(MRJob):
* 如果URL在 **SQL 数据库**中,获取该内容
* 以其内容更新 **内存缓存**
-参考 [何时更新缓存](https://github.com/donnemartin/system-design-primer#when-to-update-the-cache) 中权衡和替代的内容。以上方法描述了 [cache-aside缓存模式](https://github.com/donnemartin/system-design-primer#cache-aside).
+参考 [何时更新缓存](https://github.com/donnemartin/system-design-primer#when-to-update-the-cache) 中权衡和替代的内容。以上方法描述了 [cache-aside缓存模式](https://github.com/donnemartin/system-design-primer#cache-aside) .
我们可以使用诸如 Amazon Redshift 或者 Google BigQuery 等数据仓库解决方案,而不是将`monthly_spending`聚合表保留在 **SQL 数据库** 中。
@@ -376,10 +376,10 @@ class SpendingByCategory(MRJob):
*平均* 200 次交易写入每秒(峰值时更高)对于单个 **SQL 写入主-从服务** 来说可能是棘手的。我们可能需要考虑其它的 SQL 性能拓展技术:
-* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
-* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
-* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
-* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-调优)
+* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
+* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
+* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
+* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-调优)
我们也可以考虑将一些数据移至 **NoSQL 数据库**。
@@ -389,50 +389,50 @@ class SpendingByCategory(MRJob):
#### NoSQL
-* [键-值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
-* [文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
-* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
-* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
-* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
+* [键-值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
+* [文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
+* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
+* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
+* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
### 缓存
* 在哪缓存
- * [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
- * [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
- * [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
- * [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
- * [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
+ * [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
+ * [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
+ * [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
+ * [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
+ * [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
* 什么需要缓存
- * [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
- * [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
+ * [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
+ * [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
* 何时更新缓存
- * [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
- * [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
- * [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
- * [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
+ * [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
+ * [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
+ * [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
+ * [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
### 异步与微服务
-* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
-* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
-* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
-* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
+* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
+* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
+* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
+* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
### 通信
* 可权衡选择的方案:
- * 与客户端的外部通信 - [使用 REST 作为 HTTP API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
- * 服务器内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
-* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
+ * 与客户端的外部通信 - [使用 REST 作为 HTTP API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
+ * 服务器内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
+* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
### 安全性
-请参阅[「安全」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全)一章。
+请参阅[「安全」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全) 一章。
### 延迟数值
-请参阅[「每个程序员都应该知道的延迟数」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数)。
+请参阅[「每个程序员都应该知道的延迟数」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数) 。
### 持续探讨
diff --git a/solutions/system_design/mint/README.md b/solutions/system_design/mint/README.md
index 1ec31674..fcc60b30 100644
--- a/solutions/system_design/mint/README.md
+++ b/solutions/system_design/mint/README.md
@@ -80,7 +80,7 @@ Handy conversion guide:
> Outline a high level design with all important components.
-
+
## Step 3: Design core components
@@ -88,9 +88,9 @@ Handy conversion guide:
### Use case: User connects to a financial account
-We could store info on the 10 million users in a [relational database](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms). We should discuss the [use cases and tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql).
+We could store info on the 10 million users in a [relational database](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) . We should discuss the [use cases and tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql) .
-* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
+* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* The **Web Server** forwards the request to the **Accounts API** server
* The **Accounts API** server updates the **SQL Database** `accounts` table with the newly entered account info
@@ -106,13 +106,13 @@ account_url varchar(255) NOT NULL
account_login varchar(32) NOT NULL
account_password_hash char(64) NOT NULL
user_id int NOT NULL
-PRIMARY KEY(id)
-FOREIGN KEY(user_id) REFERENCES users(id)
+PRIMARY KEY(id)
+FOREIGN KEY(user_id) REFERENCES users(id)
```
We'll create an [index](https://github.com/donnemartin/system-design-primer#use-good-indices) on `id`, `user_id `, and `created_at` to speed up lookups (log-time instead of scanning the entire table) and to keep the data in memory. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.<sup>1</sup>
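As an illustrative sketch of the indexing step (using in-memory SQLite here purely for demonstration; the design assumes a production RDBMS), we can create the indices and confirm through the query planner that lookups use them instead of a full table scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE accounts (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        created_at TEXT NOT NULL,
        user_id INTEGER NOT NULL
    )""")
# Indices on the columns we filter and sort by
conn.execute("CREATE INDEX idx_accounts_user_id ON accounts(user_id)")
conn.execute("CREATE INDEX idx_accounts_created_at ON accounts(created_at)")

# The query plan shows a SEARCH via the index rather than a SCAN of the table
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM accounts WHERE user_id = ?", (42,)
).fetchall()
print(plan)
```

The index names and schema here are hypothetical; the point is only that indexed lookups avoid the linear scan mentioned above.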
-We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
+We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest) :
```
$ curl -X POST --data '{ "user_id": "foo", "account_url": "bar", \
@@ -120,7 +120,7 @@ $ curl -X POST --data '{ "user_id": "foo", "account_url": "bar", \
https://mint.com/api/v1/account
```
-For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc).
+For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc) .
Next, the service extracts transactions from the account.
@@ -136,8 +136,8 @@ Data flow:
* The **Client** sends a request to the **Web Server**
* The **Web Server** forwards the request to the **Accounts API** server
-* The **Accounts API** server places a job on a **Queue** such as [Amazon SQS](https://aws.amazon.com/sqs/) or [RabbitMQ](https://www.rabbitmq.com/)
- * Extracting transactions could take awhile, we'd probably want to do this [asynchronously with a queue](https://github.com/donnemartin/system-design-primer#asynchronism), although this introduces additional complexity
+* The **Accounts API** server places a job on a **Queue** such as [Amazon SQS](https://aws.amazon.com/sqs/) or [RabbitMQ](https://www.rabbitmq.com/)
+    * Extracting transactions could take a while, so we'd probably want to do this [asynchronously with a queue](https://github.com/donnemartin/system-design-primer#asynchronism) , although this introduces additional complexity
* The **Transaction Extraction Service** does the following:
* Pulls from the **Queue** and extracts transactions for the given account from the financial institution, storing the results as raw log files in the **Object Store**
* Uses the **Category Service** to categorize each transaction
@@ -156,8 +156,8 @@ created_at datetime NOT NULL
seller varchar(32) NOT NULL
amount decimal NOT NULL
user_id int NOT NULL
-PRIMARY KEY(id)
-FOREIGN KEY(user_id) REFERENCES users(id)
+PRIMARY KEY(id)
+FOREIGN KEY(user_id) REFERENCES users(id)
```
We'll create an [index](https://github.com/donnemartin/system-design-primer#use-good-indices) on `id`, `user_id `, and `created_at`.
@@ -167,11 +167,11 @@ The `monthly_spending` table could have the following structure:
```
id int NOT NULL AUTO_INCREMENT
month_year date NOT NULL
-category varchar(32)
+category varchar(32)
amount decimal NOT NULL
user_id int NOT NULL
-PRIMARY KEY(id)
-FOREIGN KEY(user_id) REFERENCES users(id)
+PRIMARY KEY(id)
+FOREIGN KEY(user_id) REFERENCES users(id)
```
We'll create an [index](https://github.com/donnemartin/system-design-primer#use-good-indices) on `id` and `user_id `.
@@ -183,7 +183,7 @@ For the **Category Service**, we can seed a seller-to-category dictionary with t
**Clarify with your interviewer how much code you are expected to write**.
```python
-class DefaultCategories(Enum):
+class DefaultCategories(Enum) :
HOUSING = 0
FOOD = 1
@@ -200,19 +200,19 @@ seller_category_map['Target'] = DefaultCategories.SHOPPING
For sellers not initially seeded in the map, we could use a crowdsourcing effort by evaluating the manual category overrides our users provide. We could use a heap to quickly lookup the top manual override per seller in O(1) time.
```python
-class Categorizer(object):
+class Categorizer(object) :
- def __init__(self, seller_category_map, seller_category_crowd_overrides_map):
+ def __init__(self, seller_category_map, seller_category_crowd_overrides_map) :
self.seller_category_map = seller_category_map
self.seller_category_crowd_overrides_map = \
seller_category_crowd_overrides_map
- def categorize(self, transaction):
+ def categorize(self, transaction) :
if transaction.seller in self.seller_category_map:
return self.seller_category_map[transaction.seller]
elif transaction.seller in self.seller_category_crowd_overrides_map:
self.seller_category_map[transaction.seller] = \
- self.seller_category_crowd_overrides_map[transaction.seller].peek_min()
+ self.seller_category_crowd_overrides_map[transaction.seller].peek_min()
return self.seller_category_map[transaction.seller]
return None
```
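The `peek_min()` call above assumes some heap-backed structure holding the crowd votes per seller. A minimal sketch of what that structure might look like (the class name and vote layout are assumptions, not part of the original design): storing negated vote counts in a min-heap keeps the most-voted override at the root, so peeking is O(1).

```python
import heapq
from collections import Counter


class OverrideHeap:
    """Hypothetical structure behind peek_min(): a min-heap of
    (negated vote count, category), so the top-voted override is the root."""

    def __init__(self, votes):
        # votes: Counter mapping category -> number of users who chose it
        self._heap = [(-count, category) for category, count in votes.items()]
        heapq.heapify(self._heap)

    def peek_min(self):
        # Peeking the heap root is O(1); no pop, so the heap is unchanged
        return self._heap[0][1]


votes = Counter({'SHOPPING': 7, 'FOOD': 2})
overrides = {'SomeSeller': OverrideHeap(votes)}
print(overrides['SomeSeller'].peek_min())  # SHOPPING
```

Updating a seller's votes would be an O(log n) heap push, which stays cheap relative to the O(1) categorization path.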
@@ -220,9 +220,9 @@ class Categorizer(object):
Transaction implementation:
```python
-class Transaction(object):
+class Transaction(object) :
- def __init__(self, created_at, seller, amount):
+ def __init__(self, created_at, seller, amount) :
self.created_at = created_at
self.seller = seller
self.amount = amount
@@ -233,13 +233,13 @@ class Transaction(object):
To start, we could use a generic budget template that allocates category amounts based on income tiers. Using this approach, we would not have to store the 100 million budget items identified in the constraints, only those that the user overrides. If a user overrides a budget category, we could store the override in the `TABLE budget_overrides`.
```python
-class Budget(object):
+class Budget(object) :
- def __init__(self, income):
+ def __init__(self, income) :
self.income = income
- self.categories_to_budget_map = self.create_budget_template()
+ self.categories_to_budget_map = self.create_budget_template()
- def create_budget_template(self):
+ def create_budget_template(self) :
return {
DefaultCategories.HOUSING: self.income * .4,
DefaultCategories.FOOD: self.income * .2,
@@ -248,7 +248,7 @@ class Budget(object):
...
}
- def override_category_budget(self, category, amount):
+ def override_category_budget(self, category, amount) :
self.categories_to_budget_map[category] = amount
```
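A condensed, runnable version of the `Budget` sketch above, showing an override in action (only the two allocations spelled out in the template are included; the elided categories are left out, and string keys stand in for the enum):

```python
class Budget(object):

    def __init__(self, income):
        self.income = income
        self.categories_to_budget_map = self.create_budget_template()

    def create_budget_template(self):
        # Only the allocations given in the text; remaining categories elided.
        return {
            'HOUSING': self.income * .4,
            'FOOD': self.income * .2,
        }

    def override_category_budget(self, category, amount):
        self.categories_to_budget_map[category] = amount

budget = Budget(50000)
budget.override_category_budget('FOOD', 15000)  # stored as a user override
```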
@@ -274,26 +274,26 @@ user_id timestamp seller amount
**MapReduce** implementation:
```python
-class SpendingByCategory(MRJob):
+class SpendingByCategory(MRJob) :
- def __init__(self, categorizer):
+ def __init__(self, categorizer) :
self.categorizer = categorizer
- self.current_year_month = calc_current_year_month()
+ self.current_year_month = self.calc_current_year_month()
...
- def calc_current_year_month(self):
+ def calc_current_year_month(self) :
"""Return the current year and month."""
...
- def extract_year_month(self, timestamp):
+ def extract_year_month(self, timestamp) :
"""Return the year and month portions of the timestamp."""
...
- def handle_budget_notifications(self, key, total):
+ def handle_budget_notifications(self, key, total) :
"""Call notification API if nearing or exceeded budget."""
...
- def mapper(self, _, line):
+ def mapper(self, _, line) :
"""Parse each log line, extract and transform relevant lines.
Argument line will be of the form:
@@ -303,31 +303,31 @@ class SpendingByCategory(MRJob):
Using the categorizer to convert seller to category,
emit key value pairs of the form:
- (user_id, 2016-01, shopping), 25
- (user_id, 2016-01, shopping), 100
- (user_id, 2016-01, gas), 50
+ (user_id, 2016-01, shopping) , 25
+ (user_id, 2016-01, shopping) , 100
+ (user_id, 2016-01, gas) , 50
"""
- user_id, timestamp, seller, amount = line.split('\t')
- category = self.categorizer.categorize(seller)
- period = self.extract_year_month(timestamp)
+ user_id, timestamp, seller, amount = line.split('\t')
+ category = self.categorizer.categorize(seller)
+ period = self.extract_year_month(timestamp)
if period == self.current_year_month:
- yield (user_id, period, category), amount
+ yield (user_id, period, category) , amount
- def reducer(self, key, value):
+ def reducer(self, key, values) :
"""Sum values for each key.
- (user_id, 2016-01, shopping), 125
- (user_id, 2016-01, gas), 50
+ (user_id, 2016-01, shopping) , 125
+ (user_id, 2016-01, gas) , 50
"""
- total = sum(values)
- yield key, sum(values)
+ total = sum(values)
+ yield key, total
```
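To make the mapper/reducer contract concrete, here is a tiny pure-Python simulation of the shuffle-and-sum step (no mrjob dependency; the log lines are made up, and the seller-to-category lookup is skipped so each line carries its category directly):

```python
from collections import defaultdict

def mapper(line):
    """Emit ((user_id, period, category), amount) pairs for one log line."""
    user_id, timestamp, category, amount = line.split('\t')
    period = timestamp[:7]  # '2016-01-08' -> '2016-01'
    yield (user_id, period, category), float(amount)

def reducer(pairs):
    """Group by key and sum the values, as the reducer above does."""
    totals = defaultdict(float)
    for key, amount in pairs:
        totals[key] += amount
    return dict(totals)

log = [
    'user_id\t2016-01-08\tshopping\t25',
    'user_id\t2016-01-15\tshopping\t100',
    'user_id\t2016-01-20\tgas\t50',
]
totals = reducer(pair for line in log for pair in mapper(line))
```

The result matches the example pairs in the docstrings: `(user_id, 2016-01, shopping), 125` and `(user_id, 2016-01, gas), 50`.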
## Step 4: Scale the design
> Identify and address bottlenecks, given the constraints.
-
+
**Important: Do not simply jump right into the final design from the initial design!**
@@ -339,19 +339,19 @@ We'll introduce some components to complete the design and to address scalabilit
*To avoid repeating discussions*, refer to the following [system design topics](https://github.com/donnemartin/system-design-primer#index-of-system-design-topics) for main talking points, tradeoffs, and alternatives:
-* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
-* [CDN](https://github.com/donnemartin/system-design-primer#content-delivery-network)
-* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
-* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
-* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
-* [API server (application layer)](https://github.com/donnemartin/system-design-primer#application-layer)
-* [Cache](https://github.com/donnemartin/system-design-primer#cache)
-* [Relational database management system (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
-* [SQL write master-slave failover](https://github.com/donnemartin/system-design-primer#fail-over)
-* [Master-slave replication](https://github.com/donnemartin/system-design-primer#master-slave-replication)
-* [Asynchronism](https://github.com/donnemartin/system-design-primer#asynchronism)
-* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
-* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
+* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
+* [CDN](https://github.com/donnemartin/system-design-primer#content-delivery-network)
+* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
+* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
+* [Web server (reverse proxy) ](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
+* [API server (application layer) ](https://github.com/donnemartin/system-design-primer#application-layer)
+* [Cache](https://github.com/donnemartin/system-design-primer#cache)
+* [Relational database management system (RDBMS) ](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
+* [SQL write master-slave failover](https://github.com/donnemartin/system-design-primer#fail-over)
+* [Master-slave replication](https://github.com/donnemartin/system-design-primer#master-slave-replication)
+* [Asynchronism](https://github.com/donnemartin/system-design-primer#asynchronism)
+* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
+* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
We'll add an additional use case: **User** accesses summaries and transactions.
@@ -367,20 +367,20 @@ User sessions, aggregate stats by category, and recent transactions could be pla
* If the url is in the **SQL Database**, fetches the contents
* Updates the **Memory Cache** with the contents
-Refer to [When to update the cache](https://github.com/donnemartin/system-design-primer#when-to-update-the-cache) for tradeoffs and alternatives. The approach above describes [cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside).
+Refer to [When to update the cache](https://github.com/donnemartin/system-design-primer#when-to-update-the-cache) for tradeoffs and alternatives. The approach above describes [cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside) .
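A minimal cache-aside sketch of the steps above (a dict stands in for the **Memory Cache** and a stub dict for the **SQL Database**; the key and contents are made up):

```python
cache = {}
database = {'abc123': 'cached contents'}  # stub for the SQL Database

def get_contents(url):
    # Cache-aside: look in the cache first, fall back to the database on a
    # miss, then populate the cache so the next read is served from memory.
    if url in cache:
        return cache[url]
    contents = database.get(url)
    if contents is not None:
        cache[url] = contents
    return contents

first = get_contents('abc123')   # miss: reads the database, fills the cache
second = get_contents('abc123')  # hit: served from the cache
```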
Instead of keeping the `monthly_spending` aggregate table in the **SQL Database**, we could create a separate **Analytics Database** using a data warehousing solution such as Amazon Redshift or Google BigQuery.
We might only want to store a month of `transactions` data in the database, while storing the rest in a data warehouse or in an **Object Store**. An **Object Store** such as Amazon S3 can comfortably handle the constraint of 250 GB of new content per month.
-To address the 200 *average* read requests per second (higher at peak), traffic for popular content should be handled by the **Memory Cache** instead of the database. The **Memory Cache** is also useful for handling the unevenly distributed traffic and traffic spikes. The **SQL Read Replicas** should be able to handle the cache misses, as long as the replicas are not bogged down with replicating writes.
+To address the 200 *average* read requests per second (higher at peak) , traffic for popular content should be handled by the **Memory Cache** instead of the database. The **Memory Cache** is also useful for handling the unevenly distributed traffic and traffic spikes. The **SQL Read Replicas** should be able to handle the cache misses, as long as the replicas are not bogged down with replicating writes.
2,000 *average* transaction writes per second (higher at peak) might be tough for a single **SQL Write Master-Slave**. We might need to employ additional SQL scaling patterns:
-* [Federation](https://github.com/donnemartin/system-design-primer#federation)
-* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
-* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
-* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
+* [Federation](https://github.com/donnemartin/system-design-primer#federation)
+* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
+* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
+* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
We should also consider moving some data to a **NoSQL Database**.
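As one illustration of these patterns, sharding could route each user's transactions by hashing the user id, keeping all of a user's rows on one shard (a sketch; the shard count and function name are made up):

```python
NUM_SHARDS = 4

def shard_for_user(user_id):
    """Deterministically map a user to one of NUM_SHARDS write masters."""
    return user_id % NUM_SHARDS

# Every transaction for the same user lands on the same shard, so
# per-user queries stay on a single database.
shard = shard_for_user(12345)
```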
@@ -390,50 +390,50 @@ We should also consider moving some data to a **NoSQL Database**.
#### NoSQL
-* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
-* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
-* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
-* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
-* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
+* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
+* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
+* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
+* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
+* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### Caching
* Where to cache
- * [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
- * [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
- * [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
- * [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
- * [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
+ * [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
+ * [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
+ * [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
+ * [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
+ * [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* What to cache
- * [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
- * [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
+ * [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
+ * [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* When to update the cache
- * [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
- * [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
- * [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
- * [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
+ * [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
+ * [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
+ * [Write-behind (write-back) ](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
+ * [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
### Asynchronism and microservices
-* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
-* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
-* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
-* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
+* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
+* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
+* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
+* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
### Communications
* Discuss tradeoffs:
- * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
- * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
-* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
+ * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
+ * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
+* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
### Security
-Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
+Refer to the [security section](https://github.com/donnemartin/system-design-primer#security) .
### Latency numbers
-See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
+See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know) .
### Ongoing
diff --git a/solutions/system_design/mint/mint_mapreduce.py b/solutions/system_design/mint/mint_mapreduce.py
index e3554243..603b70f1 100644
--- a/solutions/system_design/mint/mint_mapreduce.py
+++ b/solutions/system_design/mint/mint_mapreduce.py
@@ -3,55 +3,55 @@
from mrjob.job import MRJob
-class SpendingByCategory(MRJob):
+class SpendingByCategory(MRJob) :
- def __init__(self, categorizer):
+ def __init__(self, categorizer) :
self.categorizer = categorizer
...
- def current_year_month(self):
+ def current_year_month(self) :
"""Return the current year and month."""
...
- def extract_year_month(self, timestamp):
+ def extract_year_month(self, timestamp) :
"""Return the year and month portions of the timestamp."""
...
- def handle_budget_notifications(self, key, total):
+ def handle_budget_notifications(self, key, total) :
"""Call notification API if nearing or exceeded budget."""
...
- def mapper(self, _, line):
+ def mapper(self, _, line) :
"""Parse each log line, extract and transform relevant lines.
Emit key value pairs of the form:
- (2016-01, shopping), 25
- (2016-01, shopping), 100
- (2016-01, gas), 50
+ (2016-01, shopping) , 25
+ (2016-01, shopping) , 100
+ (2016-01, gas) , 50
"""
- timestamp, category, amount = line.split('\t')
- period = self. extract_year_month(timestamp)
- if period == self.current_year_month():
- yield (period, category), amount
+ timestamp, category, amount = line.split('\t')
+ period = self.extract_year_month(timestamp)
+ if period == self.current_year_month() :
+ yield (period, category) , amount
- def reducer(self, key, values):
+ def reducer(self, key, values) :
"""Sum values for each key.
- (2016-01, shopping), 125
- (2016-01, gas), 50
+ (2016-01, shopping) , 125
+ (2016-01, gas) , 50
"""
- total = sum(values)
- self.handle_budget_notifications(key, total)
- yield key, sum(values)
+ total = sum(values)
+ self.handle_budget_notifications(key, total)
+ yield key, total
- def steps(self):
+ def steps(self) :
"""Run the map and reduce steps."""
return [
self.mr(mapper=self.mapper,
- reducer=self.reducer)
+ reducer=self.reducer)
]
if __name__ == '__main__':
- SpendingByCategory.run()
+ SpendingByCategory.run()
diff --git a/solutions/system_design/mint/mint_snippets.py b/solutions/system_design/mint/mint_snippets.py
index cc5d228b..1bb86e0b 100644
--- a/solutions/system_design/mint/mint_snippets.py
+++ b/solutions/system_design/mint/mint_snippets.py
@@ -3,7 +3,7 @@
from enum import Enum
-class DefaultCategories(Enum):
+class DefaultCategories(Enum) :
HOUSING = 0
FOOD = 1
@@ -17,34 +17,34 @@ seller_category_map['Exxon'] = DefaultCategories.GAS
seller_category_map['Target'] = DefaultCategories.SHOPPING
-class Categorizer(object):
+class Categorizer(object) :
- def __init__(self, seller_category_map, seller_category_overrides_map):
+ def __init__(self, seller_category_map, seller_category_overrides_map) :
self.seller_category_map = seller_category_map
self.seller_category_overrides_map = seller_category_overrides_map
- def categorize(self, transaction):
+ def categorize(self, transaction) :
if transaction.seller in self.seller_category_map:
return self.seller_category_map[transaction.seller]
if transaction.seller in self.seller_category_overrides_map:
self.seller_category_map[transaction.seller] = \
- self.manual_overrides[transaction.seller].peek_min()
+ self.seller_category_overrides_map[transaction.seller].peek_min()
return self.seller_category_map[transaction.seller]
return None
-class Transaction(object):
+class Transaction(object) :
- def __init__(self, timestamp, seller, amount):
+ def __init__(self, timestamp, seller, amount) :
self.timestamp = timestamp
self.seller = seller
self.amount = amount
-class Budget(object):
+class Budget(object) :
- def __init__(self, template_categories_to_budget_map):
+ def __init__(self, template_categories_to_budget_map) :
self.categories_to_budget_map = template_categories_to_budget_map
- def override_category_budget(self, category, amount):
+ def override_category_budget(self, category, amount) :
self.categories_to_budget_map[category] = amount
diff --git a/solutions/system_design/pastebin/README-zh-Hans.md b/solutions/system_design/pastebin/README-zh-Hans.md
index d2946e97..6884aba5 100644
--- a/solutions/system_design/pastebin/README-zh-Hans.md
+++ b/solutions/system_design/pastebin/README-zh-Hans.md
@@ -1,6 +1,6 @@
-# 设计 Pastebin.com (或者 Bit.ly)
+# 设计 Pastebin.com (或者 Bit.ly)
-**注意: 为了避免重复,当前文档会直接链接到[系统设计主题](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)的相关区域,请参考链接内容以获得综合的讨论点、权衡和替代方案。**
+**注意: 为了避免重复,当前文档会直接链接到[系统设计主题](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引) 的相关区域,请参考链接内容以获得综合的讨论点、权衡和替代方案。**
**设计 Bit.ly** - 是一个类似的问题,区别是 pastebin 需要存储的是 paste 的内容,而不是原始的未短化的 url。
@@ -61,7 +61,7 @@
* `paste_path` - 255 bytes
* 总共 = ~1.27 KB
* 每个月新的 paste 内容在 12.7GB
- * (1.27 * 10000000)KB / 月的 paste
+ * (1.27 * 10000000) KB / 月的 paste
* 三年内将近 450GB 的新 paste 内容
* 三年内 3.6 亿短链接
* 假设大部分都是新的 paste,而不是需要更新已存在的 paste
@@ -79,7 +79,7 @@
> 概述一个包括所有重要的组件的高层次设计
-
+
## 第三步:设计核心组件
@@ -87,13 +87,13 @@
### 用例:用户输入一段文本,然后得到一个随机生成的链接
-我们可以用一个 [关系型数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#关系型数据库管理系统rdbms)作为一个大的哈希表,用来把生成的 url 映射到一个包含 paste 文件的文件服务器和路径上。
+我们可以用一个 [关系型数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#关系型数据库管理系统rdbms) 作为一个大的哈希表,用来把生成的 url 映射到一个包含 paste 文件的文件服务器和路径上。
-为了避免托管一个文件服务器,我们可以用一个托管的**对象存储**,比如 Amazon 的 S3 或者[NoSQL 文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)。
+为了避免托管一个文件服务器,我们可以用一个托管的**对象存储**,比如 Amazon 的 S3 或者[NoSQL 文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储) 。
-作为一个大的哈希表的关系型数据库的替代方案,我们可以用[NoSQL 键值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)。我们需要讨论[选择 SQL 或 NoSQL 之间的权衡](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)。下面的讨论是使用关系型数据库方法。
+作为一个大的哈希表的关系型数据库的替代方案,我们可以用[NoSQL 键值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储) 。我们需要讨论[选择 SQL 或 NoSQL 之间的权衡](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql) 。下面的讨论是使用关系型数据库方法。
-* **客户端** 发送一个创建 paste 的请求到作为一个[反向代理](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)启动的 **Web 服务器**。
+* **客户端** 发送一个创建 paste 的请求到作为一个[反向代理](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器) 启动的 **Web 服务器**。
* **Web 服务器** 转发请求给 **写接口** 服务器
* **写接口** 服务器执行如下操作:
* 生成一个唯一的 url
@@ -113,10 +113,10 @@ shortlink char(7) NOT NULL
expiration_length_in_minutes int NOT NULL
created_at datetime NOT NULL
paste_path varchar(255) NOT NULL
-PRIMARY KEY(shortlink)
+PRIMARY KEY(shortlink)
```
-我们将在 `shortlink` 字段和 `created_at` 字段上创建一个[数据库索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#使用正确的索引),用来提高查询的速度(避免因为扫描全表导致的长时间查询)并将数据保存在内存中,从内存里面顺序读取 1MB 的数据需要大概 250 微秒,而从 SSD 上读取则需要花费 4 倍的时间,从硬盘上则需要花费 80 倍的时间。 1
+我们将在 `shortlink` 字段和 `created_at` 字段上创建一个[数据库索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#使用正确的索引) ,用来提高查询的速度(避免因为扫描全表导致的长时间查询)并将数据保存在内存中,从内存里面顺序读取 1MB 的数据需要大概 250 微秒,而从 SSD 上读取则需要花费 4 倍的时间,从硬盘上则需要花费 80 倍的时间。 1
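The latency comparison in the paragraph above works out as follows (a sketch of the arithmetic, using the approximate figures quoted in the text: ~250 µs to sequentially read 1 MB from memory, ~4x that from SSD, ~80x from disk):

```python
memory_read_us = 250                 # ~250 µs per 1 MB sequential read from memory
ssd_read_us = memory_read_us * 4     # ~4x slower from SSD
disk_read_us = memory_read_us * 80   # ~80x slower from disk
```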
为了生成唯一的 url,我们可以:
@@ -128,15 +128,15 @@ PRIMARY KEY(shortlink)
* 对于 urls,使用 Base 62 编码 `[a-zA-Z0-9]` 是比较合适的
* 对于每一个原始输入只会有一个 hash 结果,Base 62 是确定的(不涉及随机性)
* Base 64 是另外一个流行的编码方案,但是对于 urls,会因为额外的 `+` 和 `-` 字符串而产生一些问题
- * 以下 [Base 62 伪代码](http://stackoverflow.com/questions/742013/how-to-code-a-url-shortener) 执行的时间复杂度是 O(k),k 是数字的数量 = 7:
+ * 以下 [Base 62 伪代码](http://stackoverflow.com/questions/742013/how-to-code-a-url-shortener) 执行的时间复杂度是 O(k) ,k 是数字的数量 = 7:
```python
-def base_encode(num, base=62):
+def base_encode(num, base=62) :
digits = []
while num > 0
- remainder = modulo(num, base)
- digits.push(remainder)
- num = divide(num, base)
+ remainder = modulo(num, base)
+ digits.push(remainder)
+ num = divide(num, base)
digits = digits.reverse
```
@@ -146,7 +146,7 @@ def base_encode(num, base=62):
url = base_encode(md5(ip_address+timestamp))[:URL_LENGTH]
```
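A runnable Python version of the Base 62 pseudocode above, combined with the MD5-and-truncate step (the ip address and timestamp values are made up; `URL_LENGTH = 7` as in the text):

```python
import hashlib

ALPHABET = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
URL_LENGTH = 7

def base_encode(num, base=62):
    """Encode a non-negative integer in Base 62; O(k) in the digit count."""
    if num == 0:
        return ALPHABET[0]
    digits = []
    while num > 0:
        num, remainder = divmod(num, base)
        digits.append(ALPHABET[remainder])
    return ''.join(reversed(digits))

# Hash a made-up ip address and timestamp, then keep the first 7 characters.
digest = hashlib.md5('1.2.3.4:1451606400'.encode()).hexdigest()
url = base_encode(int(digest, 16))[:URL_LENGTH]
```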
-我们将会用一个公开的 [**REST 风格接口**](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest):
+我们将会用一个公开的 [**REST 风格接口**](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest) :
```shell
$ curl -X POST --data '{"expiration_length_in_minutes":"60", "paste_contents":"Hello World!"}' https://pastebin.com/api/v1/paste
@@ -160,7 +160,7 @@ Response:
}
```
-用于内部通信,我们可以用 [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)。
+用于内部通信,我们可以用 [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc) 。
### 用例:用户输入一个 paste 的 url 后可以看到它存储的内容
@@ -192,36 +192,36 @@ Response:
因为实时分析不是必须的,所以我们可以简单的 **MapReduce** **Web Server** 的日志,用来生成点击次数。
```python
-class HitCounts(MRJob):
+class HitCounts(MRJob) :
- def extract_url(self, line):
+ def extract_url(self, line) :
"""Extract the generated url from the log line."""
...
- def extract_year_month(self, line):
+ def extract_year_month(self, line) :
"""Return the year and month portions of the timestamp."""
...
- def mapper(self, _, line):
+ def mapper(self, _, line) :
"""Parse each log line, extract and transform relevant lines.
Emit key value pairs of the form:
- (2016-01, url0), 1
- (2016-01, url0), 1
- (2016-01, url1), 1
+ (2016-01, url0) , 1
+ (2016-01, url0) , 1
+ (2016-01, url1) , 1
"""
- url = self.extract_url(line)
- period = self.extract_year_month(line)
- yield (period, url), 1
+ url = self.extract_url(line)
+ period = self.extract_year_month(line)
+ yield (period, url) , 1
- def reducer(self, key, values):
+ def reducer(self, key, values) :
"""Sum values for each key.
- (2016-01, url0), 2
- (2016-01, url1), 1
+ (2016-01, url0) , 2
+ (2016-01, url1) , 1
"""
- yield key, sum(values)
+ yield key, sum(values)
```
### 用例: 服务删除过期的 pastes
@@ -233,43 +233,43 @@ class HitCounts(MRJob):
> 给定约束条件,识别和解决瓶颈。
-
+
**重要提示: 不要简单的从最初的设计直接跳到最终的设计**
-说明您将迭代地执行这样的操作:1)**Benchmark/Load 测试**,2)**Profile** 出瓶颈,3)在评估替代方案和权衡时解决瓶颈,4)重复前面,可以参考[在 AWS 上设计一个可以支持百万用户的系统](../scaling_aws/README.md)这个用来解决如何迭代地扩展初始设计的例子。
+说明您将迭代地执行这样的操作:1) **Benchmark/Load 测试**,2) **Profile** 出瓶颈,3) 在评估替代方案和权衡时解决瓶颈,4) 重复前面,可以参考[在 AWS 上设计一个可以支持百万用户的系统](../scaling_aws/README.md) 这个用来解决如何迭代地扩展初始设计的例子。
重要的是讨论在初始设计中可能遇到的瓶颈,以及如何解决每个瓶颈。比如,在多个 **Web 服务器** 上添加 **负载平衡器** 可以解决哪些问题? **CDN** 解决哪些问题?**Master-Slave Replicas** 解决哪些问题? 替代方案是什么和怎么对每一个替代方案进行权衡比较?
我们将介绍一些组件来完成设计,并解决可伸缩性问题。内部的负载平衡器并不能减少杂乱。
-**为了避免重复的讨论**, 参考以下[系统设计主题](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)获取主要讨论要点、权衡和替代方案:
+**为了避免重复的讨论**, 参考以下[系统设计主题](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引) 获取主要讨论要点、权衡和替代方案:
-* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
-* [CDN](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#内容分发网络cdn)
-* [负载均衡器](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
-* [水平扩展](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
-* [反向代理(web 服务器)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
-* [应用层](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
-* [缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
-* [关系型数据库管理系统 (RDBMS)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#关系型数据库管理系统rdbms)
-* [SQL write master-slave failover](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#故障切换)
-* [主从复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
-* [一致性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
-* [可用性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
+* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
+* [CDN](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#内容分发网络cdn)
+* [负载均衡器](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
+* [水平扩展](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
+* [反向代理(web 服务器)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
+* [应用层](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
+* [缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
+* [关系型数据库管理系统 (RDBMS) ](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#关系型数据库管理系统rdbms)
+* [SQL write master-slave failover](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#故障切换)
+* [主从复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
+* [一致性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
+* [可用性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
**分析存储数据库** 可以用比如 Amazon Redshift 或者 Google BigQuery 这样的数据仓库解决方案。
一个像 Amazon S3 这样的 **对象存储**,可以轻松处理每月 12.7 GB 的新内容约束。
-要处理 *平均* 每秒 40 读请求(峰值更高),其中热点内容的流量应该由 **内存缓存** 处理,而不是数据库。**内存缓存** 对于处理分布不均匀的流量和流量峰值也很有用。只要副本没有陷入复制写的泥潭,**SQL Read Replicas** 应该能够处理缓存丢失。
+要处理 *平均* 每秒 40 读请求(峰值更高) ,其中热点内容的流量应该由 **内存缓存** 处理,而不是数据库。**内存缓存** 对于处理分布不均匀的流量和流量峰值也很有用。只要副本没有陷入复制写的泥潭,**SQL Read Replicas** 应该能够处理缓存丢失。
对于单个 **SQL Write Master-Slave**,*平均* 每秒 4paste 写入 (峰值更高) 应该是可以做到的。否则,我们需要使用额外的 SQL 扩展模式:
-* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
-* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
-* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
-* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#SQL调优)
+* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
+* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
+* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
+* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#SQL调优)
我们还应该考虑将一些数据移动到 **NoSQL 数据库**。
@@ -279,50 +279,50 @@ class HitCounts(MRJob):
### NoSQL
-* [键值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
-* [文档存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
-* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
-* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
-* [sql 还是 nosql](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
+* [键值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
+* [文档存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
+* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
+* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
+* [sql 还是 nosql](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
### 缓存
* 在哪缓存
- * [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
- * [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
- * [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
- * [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
- * [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
+ * [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
+ * [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
+ * [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
+ * [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
+ * [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
* 缓存什么
- * [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
- * [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
+ * [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
+ * [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
* 何时更新缓存
- * [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
- * [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
- * [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
- * [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
+ * [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
+ * [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
+ * [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
+ * [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
### 异步和微服务
-* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
-* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
-* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
-* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
+* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
+* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
+* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
+* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
### 通信
* 讨论权衡:
- * 跟客户端之间的外部通信 - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
- * 内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
-* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
+ * 跟客户端之间的外部通信 - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
+ * 内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
+* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
### 安全
-参考[安全](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全)。
+参考[安全](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全)。
### 延迟数字
-见[每个程序员都应该知道的延迟数](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数)。
+见[每个程序员都应该知道的延迟数](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数)。
### 持续进行
diff --git a/solutions/system_design/pastebin/README.md b/solutions/system_design/pastebin/README.md
index 2d87ddcc..09325580 100644
--- a/solutions/system_design/pastebin/README.md
+++ b/solutions/system_design/pastebin/README.md
@@ -1,4 +1,4 @@
-# Design Pastebin.com (or Bit.ly)
+# Design Pastebin.com (or Bit.ly)
*Note: This document links directly to relevant areas found in the [system design topics](https://github.com/donnemartin/system-design-primer#index-of-system-design-topics) to avoid duplication. Refer to the linked content for general talking points, tradeoffs, and alternatives.*
@@ -79,7 +79,7 @@ Handy conversion guide:
> Outline a high level design with all important components.
-
+
## Step 3: Design core components
@@ -89,17 +89,17 @@ Handy conversion guide:
We could use a [relational database](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) as a large hash table, mapping the generated url to a file server and path containing the paste file.
-Instead of managing a file server, we could use a managed **Object Store** such as Amazon S3 or a [NoSQL document store](https://github.com/donnemartin/system-design-primer#document-store).
+Instead of managing a file server, we could use a managed **Object Store** such as Amazon S3 or a [NoSQL document store](https://github.com/donnemartin/system-design-primer#document-store).
-An alternative to a relational database acting as a large hash table, we could use a [NoSQL key-value store](https://github.com/donnemartin/system-design-primer#key-value-store). We should discuss the [tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql). The following discussion uses the relational database approach.
+As an alternative to a relational database acting as a large hash table, we could use a [NoSQL key-value store](https://github.com/donnemartin/system-design-primer#key-value-store). We should discuss the [tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql). The following discussion uses the relational database approach.
-* The **Client** sends a create paste request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
+* The **Client** sends a create paste request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* The **Web Server** forwards the request to the **Write API** server
* The **Write API** server does the following:
* Generates a unique url
* Checks if the url is unique by looking at the **SQL Database** for a duplicate
* If the url is not unique, it generates another url
- * If we supported a custom url, we could use the user-supplied (also check for a duplicate)
+ * If we supported a custom url, we could use the user-supplied url (also checking for a duplicate)
* Saves to the **SQL Database** `pastes` table
* Saves the paste data to the **Object Store**
* Returns the url
@@ -113,7 +113,7 @@ shortlink char(7) NOT NULL
expiration_length_in_minutes int NOT NULL
created_at datetime NOT NULL
paste_path varchar(255) NOT NULL
-PRIMARY KEY(shortlink)
+PRIMARY KEY(shortlink)
```
Setting the primary key to be based on the `shortlink` column creates an [index](https://github.com/donnemartin/system-design-primer#use-good-indices) that the database uses to enforce uniqueness. We'll create an additional index on `created_at` to speed up lookups (log-time instead of scanning the entire table) and to keep the data in memory. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.1
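As a rough sanity check of those relative speeds, a sketch using the approximate figures quoted above (the variable names are illustrative):

```python
# Approximate time to read 1 MB sequentially, per the latency figures above
memory_us = 250           # from memory
ssd_us = memory_us * 4    # from SSD: ~1 ms
disk_us = memory_us * 80  # from disk: ~20 ms
```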
@@ -126,17 +126,17 @@ To generate the unique url, we could:
* Alternatively, we could also take the MD5 hash of randomly-generated data
* [**Base 62**](https://www.kerstner.at/2012/07/shortening-strings-using-base-62-encoding/) encode the MD5 hash
* Base 62 encodes to `[a-zA-Z0-9]` which works well for urls, eliminating the need for escaping special characters
- * There is only one hash result for the original input and Base 62 is deterministic (no randomness involved)
+ * There is only one hash result for the original input and Base 62 is deterministic (no randomness involved)
* Base 64 is another popular encoding but provides issues for urls because of the additional `+` and `/` characters
* The following [Base 62 pseudocode](http://stackoverflow.com/questions/742013/how-to-code-a-url-shortener) runs in O(k) time where k is the number of digits = 7:
```python
-def base_encode(num, base=62):
+def base_encode(num, base=62):
digits = []
while num > 0
- remainder = modulo(num, base)
- digits.push(remainder)
- num = divide(num, base)
+ remainder = modulo(num, base)
+ digits.push(remainder)
+ num = divide(num, base)
digits = digits.reverse
```
@@ -146,7 +146,7 @@ def base_encode(num, base=62):
url = base_encode(md5(ip_address+timestamp))[:URL_LENGTH]
```
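The pseudocode above can be made runnable. A minimal Python sketch, assuming one fixed ordering of the `[a-zA-Z0-9]` alphabet (any consistent ordering works) and a hypothetical `generate_url` helper combining the MD5 and Base 62 steps:

```python
import hashlib
import string
import time

URL_LENGTH = 7
# One fixed ordering of [a-zA-Z0-9]; the exact order is an assumption
ALPHABET = string.digits + string.ascii_uppercase + string.ascii_lowercase

def base_encode(num, base=62):
    """Encode a non-negative integer in Base 62, O(k) in output digits."""
    if num == 0:
        return ALPHABET[0]
    digits = []
    while num > 0:
        num, remainder = divmod(num, base)
        digits.append(ALPHABET[remainder])
    return ''.join(reversed(digits))

def generate_url(ip_address, timestamp=None):
    """Hash the requester's IP plus a timestamp, then Base 62 encode."""
    timestamp = time.time() if timestamp is None else timestamp
    digest = hashlib.md5(f'{ip_address}{timestamp}'.encode()).hexdigest()
    return base_encode(int(digest, 16))[:URL_LENGTH]
```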
-We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
+We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
```
$ curl -X POST --data '{ "expiration_length_in_minutes": "60", \
@@ -161,7 +161,7 @@ Response:
}
```
-For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc).
+For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc).
### Use case: User enters a paste's url and views the contents
@@ -195,36 +195,36 @@ Since realtime analytics are not a requirement, we could simply **MapReduce** th
**Clarify with your interviewer how much code you are expected to write**.
```python
-class HitCounts(MRJob):
+class HitCounts(MRJob):
- def extract_url(self, line):
+ def extract_url(self, line):
"""Extract the generated url from the log line."""
...
- def extract_year_month(self, line):
+ def extract_year_month(self, line):
"""Return the year and month portions of the timestamp."""
...
- def mapper(self, _, line):
+ def mapper(self, _, line):
"""Parse each log line, extract and transform relevant lines.
Emit key value pairs of the form:
- (2016-01, url0), 1
- (2016-01, url0), 1
- (2016-01, url1), 1
+ (2016-01, url0), 1
+ (2016-01, url0), 1
+ (2016-01, url1), 1
"""
- url = self.extract_url(line)
- period = self.extract_year_month(line)
- yield (period, url), 1
+ url = self.extract_url(line)
+ period = self.extract_year_month(line)
+ yield (period, url), 1
- def reducer(self, key, values):
+ def reducer(self, key, values):
"""Sum values for each key.
- (2016-01, url0), 2
- (2016-01, url1), 1
+ (2016-01, url0), 2
+ (2016-01, url1), 1
"""
- yield key, sum(values)
+ yield key, sum(values)
```
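If only the counting logic matters in the interview, the same aggregation can be sketched without mrjob. The log line format below is an assumption for illustration:

```python
from collections import defaultdict

def hit_counts(lines):
    """Count hits per (month, url) key, mirroring the MapReduce job above.

    Assumes each log line looks like '<ISO-8601 timestamp> <shortlink>',
    e.g. '2016-01-15T10:00:00 foo1234'.
    """
    counts = defaultdict(int)
    for line in lines:
        timestamp, url = line.split()
        period = timestamp[:7]        # '2016-01'
        counts[(period, url)] += 1    # mapper emit + reducer sum in one pass
    return dict(counts)
```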
### Use case: Service deletes expired pastes
@@ -235,7 +235,7 @@ To delete expired pastes, we could just scan the **SQL Database** for all entrie
> Identify and address bottlenecks, given the constraints.
-
+
**Important: Do not simply jump right into the final design from the initial design!**
@@ -247,31 +247,31 @@ We'll introduce some components to complete the design and to address scalabilit
*To avoid repeating discussions*, refer to the following [system design topics](https://github.com/donnemartin/system-design-primer#index-of-system-design-topics) for main talking points, tradeoffs, and alternatives:
-* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
-* [CDN](https://github.com/donnemartin/system-design-primer#content-delivery-network)
-* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
-* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
-* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
-* [API server (application layer)](https://github.com/donnemartin/system-design-primer#application-layer)
-* [Cache](https://github.com/donnemartin/system-design-primer#cache)
-* [Relational database management system (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
-* [SQL write master-slave failover](https://github.com/donnemartin/system-design-primer#fail-over)
-* [Master-slave replication](https://github.com/donnemartin/system-design-primer#master-slave-replication)
-* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
-* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
+* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
+* [CDN](https://github.com/donnemartin/system-design-primer#content-delivery-network)
+* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
+* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
+* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
+* [API server (application layer)](https://github.com/donnemartin/system-design-primer#application-layer)
+* [Cache](https://github.com/donnemartin/system-design-primer#cache)
+* [Relational database management system (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
+* [SQL write master-slave failover](https://github.com/donnemartin/system-design-primer#fail-over)
+* [Master-slave replication](https://github.com/donnemartin/system-design-primer#master-slave-replication)
+* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
+* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
The **Analytics Database** could use a data warehousing solution such as Amazon Redshift or Google BigQuery.
An **Object Store** such as Amazon S3 can comfortably handle the constraint of 12.7 GB of new content per month.
-To address the 40 *average* read requests per second (higher at peak), traffic for popular content should be handled by the **Memory Cache** instead of the database. The **Memory Cache** is also useful for handling the unevenly distributed traffic and traffic spikes. The **SQL Read Replicas** should be able to handle the cache misses, as long as the replicas are not bogged down with replicating writes.
+To address the 40 *average* read requests per second (higher at peak), traffic for popular content should be handled by the **Memory Cache** instead of the database. The **Memory Cache** is also useful for handling the unevenly distributed traffic and traffic spikes. The **SQL Read Replicas** should be able to handle the cache misses, as long as the replicas are not bogged down with replicating writes.
4 *average* paste writes per second (with higher at peak) should be do-able for a single **SQL Write Master-Slave**. Otherwise, we'll need to employ additional SQL scaling patterns:
-* [Federation](https://github.com/donnemartin/system-design-primer#federation)
-* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
-* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
-* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
+* [Federation](https://github.com/donnemartin/system-design-primer#federation)
+* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
+* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
+* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
We should also consider moving some data to a **NoSQL Database**.
@@ -281,50 +281,50 @@ We should also consider moving some data to a **NoSQL Database**.
#### NoSQL
-* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
-* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
-* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
-* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
-* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
+* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
+* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
+* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
+* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
+* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### Caching
* Where to cache
- * [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
- * [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
- * [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
- * [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
- * [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
+ * [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
+ * [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
+ * [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
+ * [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
+ * [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* What to cache
- * [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
- * [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
+ * [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
+ * [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* When to update the cache
- * [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
- * [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
- * [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
- * [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
+ * [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
+ * [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
+ * [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
+ * [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
### Asynchronism and microservices
-* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
-* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
-* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
-* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
+* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
+* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
+* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
+* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
### Communications
* Discuss tradeoffs:
- * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
- * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
-* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
+ * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
+ * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
+* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
### Security
-Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
+Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
### Latency numbers
-See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
+See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
### Ongoing
diff --git a/solutions/system_design/pastebin/pastebin.py b/solutions/system_design/pastebin/pastebin.py
index 7e8d268a..c72a2e39 100644
--- a/solutions/system_design/pastebin/pastebin.py
+++ b/solutions/system_design/pastebin/pastebin.py
@@ -3,44 +3,44 @@
from mrjob.job import MRJob
-class HitCounts(MRJob):
+class HitCounts(MRJob):
- def extract_url(self, line):
+ def extract_url(self, line):
"""Extract the generated url from the log line."""
pass
- def extract_year_month(self, line):
+ def extract_year_month(self, line):
"""Return the year and month portions of the timestamp."""
pass
- def mapper(self, _, line):
+ def mapper(self, _, line):
"""Parse each log line, extract and transform relevant lines.
Emit key value pairs of the form:
- (2016-01, url0), 1
- (2016-01, url0), 1
- (2016-01, url1), 1
+ (2016-01, url0), 1
+ (2016-01, url0), 1
+ (2016-01, url1), 1
"""
- url = self.extract_url(line)
- period = self.extract_year_month(line)
- yield (period, url), 1
+ url = self.extract_url(line)
+ period = self.extract_year_month(line)
+ yield (period, url), 1
- def reducer(self, key, values):
+ def reducer(self, key, values):
"""Sum values for each key.
- (2016-01, url0), 2
- (2016-01, url1), 1
+ (2016-01, url0), 2
+ (2016-01, url1), 1
"""
- yield key, sum(values)
+ yield key, sum(values)
- def steps(self):
+ def steps(self):
"""Run the map and reduce steps."""
return [
self.mr(mapper=self.mapper,
- reducer=self.reducer)
+ reducer=self.reducer)
]
if __name__ == '__main__':
- HitCounts.run()
+ HitCounts.run()
diff --git a/solutions/system_design/query_cache/README-zh-Hans.md b/solutions/system_design/query_cache/README-zh-Hans.md
index c6f4be75..e2cad9de 100644
--- a/solutions/system_design/query_cache/README-zh-Hans.md
+++ b/solutions/system_design/query_cache/README-zh-Hans.md
@@ -1,6 +1,6 @@
# 设计一个键-值缓存来存储最近 web 服务查询的结果
-**注意:这个文档中的链接会直接指向[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)中的有关部分,以避免重复的内容。你可以参考链接的相关内容,来了解其总的要点、方案的权衡取舍以及可选的替代方案。**
+**注意:这个文档中的链接会直接指向[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)中的有关部分,以避免重复的内容。你可以参考链接的相关内容,来了解其总的要点、方案的权衡取舍以及可选的替代方案。**
## 第一步:简述用例与约束条件
@@ -58,7 +58,7 @@
> 列出所有重要组件以规划概要设计。
-
+
## 第三步:设计核心组件
@@ -70,7 +70,7 @@
由于缓存容量有限,我们将使用 LRU(近期最少使用算法)来控制缓存的过期。
-* **客户端**向运行[反向代理](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)的 **Web 服务器**发送一个请求
+* **客户端**向运行[反向代理](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)的 **Web 服务器**发送一个请求
* 这个 **Web 服务器**将请求转发给**查询 API** 服务
* **查询 API** 服务将会做这些事情:
* 分析查询
@@ -98,33 +98,33 @@
实现**查询 API 服务**:
```python
-class QueryApi(object):
+class QueryApi(object):
- def __init__(self, memory_cache, reverse_index_service):
+ def __init__(self, memory_cache, reverse_index_service):
self.memory_cache = memory_cache
self.reverse_index_service = reverse_index_service
- def parse_query(self, query):
+ def parse_query(self, query):
"""移除多余内容,将文本分割成词组,修复拼写错误,
规范化字母大小写,转换布尔运算。
"""
...
- def process_query(self, query):
- query = self.parse_query(query)
- results = self.memory_cache.get(query)
+ def process_query(self, query):
+ query = self.parse_query(query)
+ results = self.memory_cache.get(query)
if results is None:
- results = self.reverse_index_service.process_search(query)
- self.memory_cache.set(query, results)
+ results = self.reverse_index_service.process_search(query)
+ self.memory_cache.set(query, results)
return results
```
实现**节点**:
```python
-class Node(object):
+class Node(object):
- def __init__(self, query, results):
+ def __init__(self, query, results):
self.query = query
self.results = results
```
@@ -132,34 +132,34 @@ class Node(object):
实现**链表**:
```python
-class LinkedList(object):
+class LinkedList(object):
- def __init__(self):
+ def __init__(self):
self.head = None
self.tail = None
- def move_to_front(self, node):
+ def move_to_front(self, node):
...
- def append_to_front(self, node):
+ def append_to_front(self, node):
...
- def remove_from_tail(self):
+ def remove_from_tail(self):
...
```
实现**缓存**:
```python
-class Cache(object):
+class Cache(object):
- def __init__(self, MAX_SIZE):
+ def __init__(self, MAX_SIZE):
self.MAX_SIZE = MAX_SIZE
self.size = 0
self.lookup = {} # key: query, value: node
- self.linked_list = LinkedList()
+ self.linked_list = LinkedList()
- def get(self, query)
+ def get(self, query):
"""从缓存取得存储的内容
将入口节点位置更新为 LRU 链表的头部。
@@ -167,10 +167,10 @@ class Cache(object):
node = self.lookup[query]
if node is None:
return None
- self.linked_list.move_to_front(node)
+ self.linked_list.move_to_front(node)
return node.results
- def set(self, results, query):
+ def set(self, results, query):
"""将所给查询键的结果存在缓存中。
当更新缓存记录的时候,将它的位置指向 LRU 链表的头部。
@@ -181,18 +181,18 @@ class Cache(object):
if node is not None:
# 键存在于缓存中,更新它对应的值
node.results = results
- self.linked_list.move_to_front(node)
+ self.linked_list.move_to_front(node)
else:
# 键不存在于缓存中
if self.size == self.MAX_SIZE:
# 在链表中查找并删除最老的记录
- self.lookup.pop(self.linked_list.tail.query, None)
- self.linked_list.remove_from_tail()
+ self.lookup.pop(self.linked_list.tail.query, None)
+ self.linked_list.remove_from_tail()
else:
self.size += 1
# 添加新的键值对
- new_node = Node(query, results)
- self.linked_list.append_to_front(new_node)
+ new_node = Node(query, results)
+ self.linked_list.append_to_front(new_node)
self.lookup[query] = new_node
```
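A compact, runnable equivalent of the lookup-plus-linked-list design above, using `OrderedDict` in place of the hand-rolled list (a sketch; here the head of the dict holds the most recently used entry):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: most recently used entry at the front."""

    def __init__(self, max_size):
        self.max_size = max_size
        self.data = OrderedDict()

    def get(self, query):
        if query not in self.data:
            return None
        self.data.move_to_end(query, last=False)  # promote to front
        return self.data[query]

    def set(self, query, results):
        if query not in self.data and len(self.data) == self.max_size:
            self.data.popitem(last=True)          # evict LRU at the tail
        self.data[query] = results
        self.data.move_to_end(query, last=False)  # promote to front
```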
@@ -206,13 +206,13 @@ class Cache(object):
解决这些问题的最直接的方法,就是为缓存记录设置一个它在被更新前能留在缓存中的最长时间,这个时间简称为存活时间(TTL)。
-参考 [「何时更新缓存」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#何时更新缓存)来了解其权衡取舍及替代方案。以上方法在[缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)一章中详细地进行了描述。
+参考 [「何时更新缓存」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#何时更新缓存)来了解其权衡取舍及替代方案。以上方法在[缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)一章中详细地进行了描述。
## 第四步:架构扩展
> 根据限制条件,找到并解决瓶颈。
-
+
**重要提示:不要从最初设计直接跳到最终设计中!**
@@ -222,16 +222,16 @@ class Cache(object):
我们将会介绍一些组件来完成设计,并解决架构扩张问题。内置的负载均衡器将不做讨论以节省篇幅。
-**为了避免重复讨论**,请参考[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)相关部分来了解其要点、方案的权衡取舍以及可选的替代方案。
+**为了避免重复讨论**,请参考[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)相关部分来了解其要点、方案的权衡取舍以及可选的替代方案。
-* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
-* [负载均衡器](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
-* [水平拓展](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
-* [反向代理(web 服务器)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
-* [API 服务(应用层)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
-* [缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
-* [一致性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
-* [可用性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
+* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
+* [负载均衡器](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
+* [水平拓展](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
+* [反向代理(web 服务器)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
+* [API 服务(应用层)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
+* [缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
+* [一致性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
+* [可用性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
### 将内存缓存扩大到多台机器
@@ -239,7 +239,7 @@ class Cache(object):
* **缓存集群中的每一台机器都有自己的缓存** - 简单,但是它会降低缓存命中率。
* **缓存集群中的每一台机器都有缓存的拷贝** - 简单,但是它的内存使用效率太低了。
-* **对缓存进行[分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片),分别部署在缓存集群中的所有机器中** - 更加复杂,但是它是最佳的选择。我们可以使用哈希,用查询语句 `machine = hash(query)` 来确定哪台机器有需要缓存。当然我们也可以使用[一致性哈希](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#正在完善中)。
+* **对缓存进行[分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片),分别部署在缓存集群中的所有机器中** - 更加复杂,但是它是最佳的选择。我们可以使用哈希,用查询语句 `machine = hash(query)` 来确定哪台机器有需要缓存。当然我们也可以使用[一致性哈希](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#正在完善中)。
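The `machine = hash(query)` idea in the bullet above can be sketched as simple modulo sharding (the function name and digest choice are illustrative):

```python
import hashlib

def machine_for(query, num_machines):
    """Map a query to one cache machine via modulo hashing.

    Stable for a fixed cluster size; when num_machines changes, most keys
    remap, which is the problem consistent hashing addresses.
    """
    digest = hashlib.md5(query.encode('utf-8')).hexdigest()
    return int(digest, 16) % num_machines
```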
## 其它要点
@@ -247,58 +247,58 @@ class Cache(object):
### SQL 缩放模式
-* [读取复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
-* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
-* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
-* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
-* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-调优)
+* [读取复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
+* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
+* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
+* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
+* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-调优)
#### NoSQL
-* [键-值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
-* [文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
-* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
-* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
-* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
+* [键-值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
+* [文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
+* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
+* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
+* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
### 缓存
* 在哪缓存
- * [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
- * [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
- * [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
- * [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
- * [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
+ * [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
+ * [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
+ * [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
+ * [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
+ * [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
* 什么需要缓存
- * [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
- * [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
+ * [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
+ * [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
* 何时更新缓存
- * [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
- * [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
- * [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
- * [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
+ * [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
+ * [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
+ * [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
+ * [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
### 异步与微服务
-* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
-* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
-* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
-* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
+* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
+* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
+* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
+* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
### 通信
* 可权衡选择的方案:
- * 与客户端的外部通信 - [使用 REST 作为 HTTP API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
- * 服务器内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
-* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
+ * 与客户端的外部通信 - [使用 REST 作为 HTTP API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
+ * 服务器内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
+* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
### 安全性
-请参阅[「安全」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全)一章。
+请参阅[「安全」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全)一章。
### 延迟数值
-请参阅[「每个程序员都应该知道的延迟数」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数)。
+请参阅[「每个程序员都应该知道的延迟数」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数)。
### 持续探讨
diff --git a/solutions/system_design/query_cache/README.md b/solutions/system_design/query_cache/README.md
index 032adf34..3494456a 100644
--- a/solutions/system_design/query_cache/README.md
+++ b/solutions/system_design/query_cache/README.md
@@ -58,7 +58,7 @@ Handy conversion guide:
> Outline a high level design with all important components.
-
+
## Step 3: Design core components
@@ -70,7 +70,7 @@ Popular queries can be served from a **Memory Cache** such as Redis or Memcached
Since the cache has limited capacity, we'll use a least recently used (LRU) approach to expire older entries.
-* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
+* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* The **Web Server** forwards the request to the **Query API** server
* The **Query API** server does the following:
* Parses the query
@@ -98,33 +98,33 @@ The cache can use a doubly-linked list: new items will be added to the head whil
**Query API Server** implementation:
```python
-class QueryApi(object):
+class QueryApi(object):
- def __init__(self, memory_cache, reverse_index_service):
+    def __init__(self, memory_cache, reverse_index_service):
self.memory_cache = memory_cache
self.reverse_index_service = reverse_index_service
- def parse_query(self, query):
+    def parse_query(self, query):
"""Remove markup, break text into terms, deal with typos,
normalize capitalization, convert to use boolean operations.
"""
...
- def process_query(self, query):
- query = self.parse_query(query)
- results = self.memory_cache.get(query)
+    def process_query(self, query):
+ query = self.parse_query(query)
+ results = self.memory_cache.get(query)
if results is None:
- results = self.reverse_index_service.process_search(query)
- self.memory_cache.set(query, results)
+ results = self.reverse_index_service.process_search(query)
+ self.memory_cache.set(query, results)
return results
```
**Node** implementation:
```python
-class Node(object):
+class Node(object):
- def __init__(self, query, results):
+    def __init__(self, query, results):
self.query = query
self.results = results
```
@@ -132,34 +132,34 @@ class Node(object):
**LinkedList** implementation:
```python
-class LinkedList(object):
+class LinkedList(object):
- def __init__(self):
+    def __init__(self):
self.head = None
self.tail = None
- def move_to_front(self, node):
+    def move_to_front(self, node):
...
- def append_to_front(self, node):
+    def append_to_front(self, node):
...
- def remove_from_tail(self):
+    def remove_from_tail(self):
...
```
**Cache** implementation:
```python
-class Cache(object):
+class Cache(object):
- def __init__(self, MAX_SIZE):
+    def __init__(self, MAX_SIZE):
self.MAX_SIZE = MAX_SIZE
self.size = 0
self.lookup = {} # key: query, value: node
- self.linked_list = LinkedList()
+ self.linked_list = LinkedList()
- def get(self, query)
+    def get(self, query):
"""Get the stored query result from the cache.
Accessing a node updates its position to the front of the LRU list.
@@ -167,10 +167,10 @@ class Cache(object):
node = self.lookup[query]
if node is None:
return None
- self.linked_list.move_to_front(node)
+ self.linked_list.move_to_front(node)
return node.results
- def set(self, results, query):
+    def set(self, query, results):
"""Set the result for the given query key in the cache.
When updating an entry, updates its position to the front of the LRU list.
@@ -181,18 +181,18 @@ class Cache(object):
if node is not None:
# Key exists in cache, update the value
node.results = results
- self.linked_list.move_to_front(node)
+ self.linked_list.move_to_front(node)
else:
# Key does not exist in cache
if self.size == self.MAX_SIZE:
# Remove the oldest entry from the linked list and lookup
- self.lookup.pop(self.linked_list.tail.query, None)
- self.linked_list.remove_from_tail()
+ self.lookup.pop(self.linked_list.tail.query, None)
+ self.linked_list.remove_from_tail()
else:
self.size += 1
# Add the new key and value
- new_node = Node(query, results)
- self.linked_list.append_to_front(new_node)
+ new_node = Node(query, results)
+ self.linked_list.append_to_front(new_node)
self.lookup[query] = new_node
```
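The `Cache`/`LinkedList` pair above leaves the list operations as stubs. The same LRU semantics can be sketched as a self-contained, runnable example with Python's `collections.OrderedDict`, which also combines a hash lookup with a recency ordering (class and method names here are illustrative, not part of the original solution):

```python
from collections import OrderedDict


class LruCache(object):
    """Minimal LRU sketch: a hash lookup plus an ordered structure whose
    end holds the most recently used entry, mirroring the Cache above."""

    def __init__(self, max_size):
        self.max_size = max_size
        self.entries = OrderedDict()  # key: query, value: results

    def get(self, query):
        if query not in self.entries:
            return None
        self.entries.move_to_end(query)  # mark as most recently used
        return self.entries[query]

    def set(self, query, results):
        if query in self.entries:
            self.entries.move_to_end(query)
        elif len(self.entries) == self.max_size:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[query] = results
```

For example, with `max_size=2`, setting `a` and `b`, reading `a`, then setting `c` evicts `b`, since `b` is the least recently used entry.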
@@ -204,15 +204,15 @@ The cache should be updated when:
* The page is removed or a new page is added
* The page rank changes
-The most straightforward way to handle these cases is to simply set a max time that a cached entry can stay in the cache before it is updated, usually referred to as time to live (TTL).
+The most straightforward way to handle these cases is to simply set a max time that a cached entry can stay in the cache before it is updated, usually referred to as time to live (TTL).
-Refer to [When to update the cache](https://github.com/donnemartin/system-design-primer#when-to-update-the-cache) for tradeoffs and alternatives. The approach above describes [cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside).
+Refer to [When to update the cache](https://github.com/donnemartin/system-design-primer#when-to-update-the-cache) for tradeoffs and alternatives. The approach above describes [cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside).
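A rough sketch of cache-aside with a TTL: reads fall through to the backend on a miss or an expired entry, and the result is written back to the cache (`TtlCache`, `backend`, and `process_query` are illustrative names, not part of the solution code):

```python
import time


class TtlCache(object):
    """Cache whose entries expire after a fixed time to live (TTL)."""

    def __init__(self, ttl_seconds):
        self.ttl_seconds = ttl_seconds
        self.store = {}  # key: query, value: (results, expiry timestamp)

    def get(self, query):
        entry = self.store.get(query)
        if entry is None:
            return None
        results, expires_at = entry
        if time.time() >= expires_at:
            # Entry outlived its TTL: evict it and report a miss
            del self.store[query]
            return None
        return results

    def set(self, query, results):
        self.store[query] = (results, time.time() + self.ttl_seconds)


def process_query(query, cache, backend):
    """Cache-aside: check the cache first, fall back to the backend on a miss."""
    results = cache.get(query)
    if results is None:
        results = backend(query)
        cache.set(query, results)
    return results
```

A short TTL bounds staleness at the cost of more backend traffic; a long TTL does the opposite, which is the tradeoff the cases above (removed pages, rank changes) force you to pick.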
## Step 4: Scale the design
> Identify and address bottlenecks, given the constraints.
-
+
**Important: Do not simply jump right into the final design from the initial design!**
@@ -224,14 +224,14 @@ We'll introduce some components to complete the design and to address scalabilit
*To avoid repeating discussions*, refer to the following [system design topics](https://github.com/donnemartin/system-design-primer#index-of-system-design-topics) for main talking points, tradeoffs, and alternatives:
-* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
-* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
-* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
-* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
-* [API server (application layer)](https://github.com/donnemartin/system-design-primer#application-layer)
-* [Cache](https://github.com/donnemartin/system-design-primer#cache)
-* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
-* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
+* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
+* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
+* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
+* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
+* [API server (application layer)](https://github.com/donnemartin/system-design-primer#application-layer)
+* [Cache](https://github.com/donnemartin/system-design-primer#cache)
+* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
+* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
### Expanding the Memory Cache to many machines
@@ -239,7 +239,7 @@ To handle the heavy request load and the large amount of memory needed, we'll sc
* **Each machine in the cache cluster has its own cache** - Simple, although it will likely result in a low cache hit rate.
* **Each machine in the cache cluster has a copy of the cache** - Simple, although it is an inefficient use of memory.
-* **The cache is [sharded](https://github.com/donnemartin/system-design-primer#sharding) across all machines in the cache cluster** - More complex, although it is likely the best option. We could use hashing to determine which machine could have the cached results of a query using `machine = hash(query)`. We'll likely want to use [consistent hashing](https://github.com/donnemartin/system-design-primer#under-development).
+* **The cache is [sharded](https://github.com/donnemartin/system-design-primer#sharding) across all machines in the cache cluster** - More complex, although it is likely the best option. We could use hashing to determine which machine could have the cached results of a query using `machine = hash(query)`. We'll likely want to use [consistent hashing](https://github.com/donnemartin/system-design-primer#under-development).
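The `machine = hash(query)` shard selection can be sketched as follows; `hashlib` is used because it is stable across processes, unlike Python's built-in `hash()`, which is randomized per interpreter run (the function name and cluster size are illustrative assumptions):

```python
import hashlib

NUM_MACHINES = 4  # assumed cache cluster size for this example


def shard_for(query, num_machines=NUM_MACHINES):
    """Pick the cache machine for a query: machine = hash(query) % num_machines."""
    digest = hashlib.sha256(query.encode('utf-8')).hexdigest()
    return int(digest, 16) % num_machines
```

With plain modulo hashing, growing the cluster from 4 to 5 machines remaps roughly 80% of keys (only keys where `x % 4 == x % 5` stay put), invalidating most of the cache at once; consistent hashing limits the remapped fraction to roughly `1/n`, which is why it is the preferred choice here.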
## Additional talking points
@@ -247,58 +247,58 @@ To handle the heavy request load and the large amount of memory needed, we'll sc
### SQL scaling patterns
-* [Read replicas](https://github.com/donnemartin/system-design-primer#master-slave-replication)
-* [Federation](https://github.com/donnemartin/system-design-primer#federation)
-* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
-* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
-* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
+* [Read replicas](https://github.com/donnemartin/system-design-primer#master-slave-replication)
+* [Federation](https://github.com/donnemartin/system-design-primer#federation)
+* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
+* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
+* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
#### NoSQL
-* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
-* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
-* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
-* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
-* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
+* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
+* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
+* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
+* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
+* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### Caching
* Where to cache
- * [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
- * [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
- * [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
- * [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
- * [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
+ * [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
+ * [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
+ * [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
+ * [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
+ * [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* What to cache
- * [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
- * [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
+ * [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
+ * [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* When to update the cache
- * [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
- * [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
- * [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
- * [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
+ * [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
+ * [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
+    * [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
+ * [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
### Asynchronism and microservices
-* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
-* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
-* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
-* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
+* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
+* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
+* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
+* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
### Communications
* Discuss tradeoffs:
- * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
- * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
-* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
+ * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
+ * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
+* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
### Security
-Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
+Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
### Latency numbers
-See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
+See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
### Ongoing
diff --git a/solutions/system_design/query_cache/query_cache_snippets.py b/solutions/system_design/query_cache/query_cache_snippets.py
index 19d3f5cd..5998fc56 100644
--- a/solutions/system_design/query_cache/query_cache_snippets.py
+++ b/solutions/system_design/query_cache/query_cache_snippets.py
@@ -1,59 +1,59 @@
# -*- coding: utf-8 -*-
-class QueryApi(object):
+class QueryApi(object):
- def __init__(self, memory_cache, reverse_index_cluster):
+    def __init__(self, memory_cache, reverse_index_cluster):
self.memory_cache = memory_cache
self.reverse_index_cluster = reverse_index_cluster
- def parse_query(self, query):
+    def parse_query(self, query):
"""Remove markup, break text into terms, deal with typos,
normalize capitalization, convert to use boolean operations.
"""
...
- def process_query(self, query):
- query = self.parse_query(query)
- results = self.memory_cache.get(query)
+    def process_query(self, query):
+ query = self.parse_query(query)
+ results = self.memory_cache.get(query)
if results is None:
- results = self.reverse_index_cluster.process_search(query)
- self.memory_cache.set(query, results)
+ results = self.reverse_index_cluster.process_search(query)
+ self.memory_cache.set(query, results)
return results
-class Node(object):
+class Node(object):
- def __init__(self, query, results):
+    def __init__(self, query, results):
self.query = query
self.results = results
-class LinkedList(object):
+class LinkedList(object):
- def __init__(self):
+    def __init__(self):
self.head = None
self.tail = None
- def move_to_front(self, node):
+    def move_to_front(self, node):
...
- def append_to_front(self, node):
+    def append_to_front(self, node):
...
- def remove_from_tail(self):
+    def remove_from_tail(self):
...
-class Cache(object):
+class Cache(object):
- def __init__(self, MAX_SIZE):
+    def __init__(self, MAX_SIZE):
self.MAX_SIZE = MAX_SIZE
self.size = 0
self.lookup = {}
- self.linked_list = LinkedList()
+ self.linked_list = LinkedList()
- def get(self, query):
+    def get(self, query):
"""Get the stored query result from the cache.
Accessing a node updates its position to the front of the LRU list.
@@ -61,10 +61,10 @@ class Cache(object):
node = self.lookup[query]
if node is None:
return None
- self.linked_list.move_to_front(node)
+ self.linked_list.move_to_front(node)
return node.results
- def set(self, results, query):
+    def set(self, query, results):
"""Set the result for the given query key in the cache.
When updating an entry, updates its position to the front of the LRU list.
@@ -75,16 +75,16 @@ class Cache(object):
if node is not None:
# Key exists in cache, update the value
node.results = results
- self.linked_list.move_to_front(node)
+ self.linked_list.move_to_front(node)
else:
# Key does not exist in cache
if self.size == self.MAX_SIZE:
# Remove the oldest entry from the linked list and lookup
- self.lookup.pop(self.linked_list.tail.query, None)
- self.linked_list.remove_from_tail()
+ self.lookup.pop(self.linked_list.tail.query, None)
+ self.linked_list.remove_from_tail()
else:
self.size += 1
# Add the new key and value
- new_node = Node(query, results)
- self.linked_list.append_to_front(new_node)
+ new_node = Node(query, results)
+ self.linked_list.append_to_front(new_node)
self.lookup[query] = new_node
diff --git a/solutions/system_design/sales_rank/README-zh-Hans.md b/solutions/system_design/sales_rank/README-zh-Hans.md
index 960f9258..4f65aefd 100644
--- a/solutions/system_design/sales_rank/README-zh-Hans.md
+++ b/solutions/system_design/sales_rank/README-zh-Hans.md
@@ -1,6 +1,6 @@
# 为 Amazon 设计分类售卖排行
-**注意:这个文档中的链接会直接指向[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)中的有关部分,以避免重复的内容。你可以参考链接的相关内容,来了解其总的要点、方案的权衡取舍以及可选的替代方案。**
+**注意:这个文档中的链接会直接指向[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)中的有关部分,以避免重复的内容。你可以参考链接的相关内容,来了解其总的要点、方案的权衡取舍以及可选的替代方案。**
## 第一步:简述用例与约束条件
@@ -70,7 +70,7 @@
> 列出所有重要组件以规划概要设计。
-
+
## 第三步:设计核心组件
@@ -95,94 +95,94 @@ t5 product4 category1 1 5.00 5 6
...
```
-**售卖排行服务** 需要用到 **MapReduce**,并使用 **售卖 API** 服务进行日志记录,同时将结果写入 **SQL 数据库**中的总表 `sales_rank` 中。我们也可以讨论一下[究竟是用 SQL 还是用 NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)。
+**售卖排行服务** 需要用到 **MapReduce**,并使用 **售卖 API** 服务进行日志记录,同时将结果写入 **SQL 数据库**中的总表 `sales_rank` 中。我们也可以讨论一下[究竟是用 SQL 还是用 NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)。
我们需要通过以下步骤使用 **MapReduce**:
-* **第 1 步** - 将数据转换为 `(category, product_id), sum(quantity)` 的形式
+* **第 1 步** - 将数据转换为 `(category, product_id), sum(quantity)` 的形式
* **第 2 步** - 执行分布式排序
```python
-class SalesRanker(MRJob):
+class SalesRanker(MRJob):
- def within_past_week(self, timestamp):
+    def within_past_week(self, timestamp):
"""如果时间戳属于过去的一周则返回 True,
否则返回 False。"""
...
- def mapper(self, _ line):
+    def mapper(self, _, line):
"""解析日志的每一行,提取并转换相关行,
将键值对设定为如下形式:
- (category1, product1), 2
- (category2, product1), 2
- (category2, product1), 1
- (category1, product2), 3
- (category2, product3), 7
- (category1, product4), 1
+        (category1, product1), 2
+        (category2, product1), 2
+        (category2, product1), 1
+        (category1, product2), 3
+        (category2, product3), 7
+        (category1, product4), 1
"""
timestamp, product_id, category_id, quantity, total_price, seller_id, \
- buyer_id = line.split('\t')
- if self.within_past_week(timestamp):
- yield (category_id, product_id), quantity
+ buyer_id = line.split('\t')
+        if self.within_past_week(timestamp):
+            yield (category_id, product_id), quantity
- def reducer(self, key, value):
+    def reducer(self, key, values):
"""将每个 key 的值加起来。
- (category1, product1), 2
- (category2, product1), 3
- (category1, product2), 3
- (category2, product3), 7
- (category1, product4), 1
+        (category1, product1), 2
+        (category2, product1), 3
+        (category1, product2), 3
+        (category2, product3), 7
+        (category1, product4), 1
"""
- yield key, sum(values)
+ yield key, sum(values)
- def mapper_sort(self, key, value):
+    def mapper_sort(self, key, value):
"""构造 key 以确保正确的排序。
将键值对转换成如下形式:
- (category1, 2), product1
- (category2, 3), product1
- (category1, 3), product2
- (category2, 7), product3
- (category1, 1), product4
+        (category1, 2), product1
+        (category2, 3), product1
+        (category1, 3), product2
+        (category2, 7), product3
+        (category1, 1), product4
MapReduce 的随机排序步骤会将键
值的排序打乱,变成下面这样:
- (category1, 1), product4
- (category1, 2), product1
- (category1, 3), product2
- (category2, 3), product1
- (category2, 7), product3
+        (category1, 1), product4
+        (category1, 2), product1
+        (category1, 3), product2
+        (category2, 3), product1
+        (category2, 7), product3
"""
category_id, product_id = key
quantity = value
- yield (category_id, quantity), product_id
+        yield (category_id, quantity), product_id
- def reducer_identity(self, key, value):
+    def reducer_identity(self, key, value):
yield key, value
- def steps(self):
+    def steps(self):
""" 此处为 map reduce 步骤"""
return [
self.mr(mapper=self.mapper,
- reducer=self.reducer),
+                    reducer=self.reducer),
self.mr(mapper=self.mapper_sort,
- reducer=self.reducer_identity),
+                    reducer=self.reducer_identity),
]
```
得到的结果将会是如下的排序列,我们将其插入 `sales_rank` 表中:
```
-(category1, 1), product4
-(category1, 2), product1
-(category1, 3), product2
-(category2, 3), product1
-(category2, 7), product3
+(category1, 1), product4
+(category1, 2), product1
+(category1, 3), product2
+(category2, 3), product1
+(category2, 7), product3
```
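The two MapReduce steps can be exercised in plain Python against the sample pairs above (a single-process simplification that ignores distribution; the function name is illustrative):

```python
from collections import defaultdict


def sales_rank(log_pairs):
    """Sum quantities per (category, product) as the reducer does,
    then re-key as (category, quantity) and sort, as mapper_sort does."""
    totals = defaultdict(int)
    for (category, product), quantity in log_pairs:
        totals[(category, product)] += quantity
    return sorted(((category, quantity), product)
                  for (category, product), quantity in totals.items())


log_pairs = [
    (('category1', 'product1'), 2),
    (('category2', 'product1'), 2),
    (('category2', 'product1'), 1),
    (('category1', 'product2'), 3),
    (('category2', 'product3'), 7),
    (('category1', 'product4'), 1),
]
```

Running `sales_rank(log_pairs)` reproduces the sorted list shown above, ready for insertion into the `sales_rank` table.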
`sales_rank` 表的数据结构如下:
@@ -192,20 +192,20 @@ id int NOT NULL AUTO_INCREMENT
category_id int NOT NULL
total_sold int NOT NULL
product_id int NOT NULL
-PRIMARY KEY(id)
-FOREIGN KEY(category_id) REFERENCES Categories(id)
-FOREIGN KEY(product_id) REFERENCES Products(id)
+PRIMARY KEY(id)
+FOREIGN KEY(category_id) REFERENCES Categories(id)
+FOREIGN KEY(product_id) REFERENCES Products(id)
```
-我们会以 `id`、`category_id` 与 `product_id` 创建一个 [索引](https://github.com/donnemartin/system-design-primer#use-good-indices)以加快查询速度(只需要使用读取日志的时间,不再需要每次都扫描整个数据表)并让数据常驻内存。从内存读取 1 MB 连续数据大约要花 250 微秒,而从 SSD 读取同样大小的数据要花费 4 倍的时间,从机械硬盘读取需要花费 80 倍以上的时间。1
+我们会以 `id`、`category_id` 与 `product_id` 创建一个 [索引](https://github.com/donnemartin/system-design-primer#use-good-indices)以加快查询速度(只需要使用读取日志的时间,不再需要每次都扫描整个数据表)并让数据常驻内存。从内存读取 1 MB 连续数据大约要花 250 微秒,而从 SSD 读取同样大小的数据要花费 4 倍的时间,从机械硬盘读取需要花费 80 倍以上的时间。1
### 用例:用户需要根据分类浏览上周中最受欢迎的商品
-* **客户端**向运行[反向代理](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)的 **Web 服务器**发送一个请求
+* **客户端**向运行[反向代理](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)的 **Web 服务器**发送一个请求
* 这个 **Web 服务器**将请求转发给**查询 API** 服务
* **查询 API** 服务将从 **SQL 数据库**的 `sales_rank` 表中读取数据
-我们可以调用一个公共的 [REST API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest):
+我们可以调用一个公共的 [REST API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest):
```
$ curl https://amazon.com/api/v1/popular?category_id=1234
@@ -234,13 +234,13 @@ $ curl https://amazon.com/api/v1/popular?category_id=1234
},
```
-而对于服务器内部的通信,我们可以使用 [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)。
+而对于服务器内部的通信,我们可以使用 [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)。
## 第四步:架构扩展
> 根据限制条件,找到并解决瓶颈。
-
+
**重要提示:不要从最初设计直接跳到最终设计中!**
@@ -250,19 +250,19 @@ $ curl https://amazon.com/api/v1/popular?category_id=1234
我们将会介绍一些组件来完成设计,并解决架构扩张问题。内置的负载均衡器将不做讨论以节省篇幅。
-**为了避免重复讨论**,请参考[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)相关部分来了解其要点、方案的权衡取舍以及可选的替代方案。
+**为了避免重复讨论**,请参考[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)相关部分来了解其要点、方案的权衡取舍以及可选的替代方案。
-* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
-* [负载均衡器](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
-* [水平拓展](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
-* [反向代理(web 服务器)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
-* [API 服务(应用层)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
-* [缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
-* [关系型数据库管理系统 (RDBMS)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#关系型数据库管理系统rdbms)
-* [SQL 故障主从切换](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#故障切换)
-* [主从复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
-* [一致性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
-* [可用性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
+* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
+* [负载均衡器](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
+* [水平拓展](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
+* [反向代理(web 服务器)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
+* [API 服务(应用层)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
+* [缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
+* [关系型数据库管理系统 (RDBMS)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#关系型数据库管理系统rdbms)
+* [SQL 故障主从切换](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#故障切换)
+* [主从复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
+* [一致性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
+* [可用性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
**分析数据库** 可以用现成的数据仓储系统,例如使用 Amazon Redshift 或者 Google BigQuery 的解决方案。
@@ -274,10 +274,10 @@ $ curl https://amazon.com/api/v1/popular?category_id=1234
SQL 缩放模式包括:
-* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
-* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
-* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
-* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-调优)
+* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
+* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
+* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
+* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-调优)
我们也可以考虑将一些数据移至 **NoSQL 数据库**。
@@ -287,50 +287,50 @@ SQL 缩放模式包括:
#### NoSQL
-* [键-值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
-* [文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
-* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
-* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
-* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
+* [键-值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
+* [文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
+* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
+* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
+* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
### 缓存
* 在哪缓存
- * [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
- * [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
- * [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
- * [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
- * [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
+ * [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
+ * [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
+ * [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
+ * [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
+ * [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
* 什么需要缓存
- * [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
- * [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
+ * [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
+ * [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
* 何时更新缓存
- * [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
- * [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
- * [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
- * [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
+ * [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
+ * [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
+ * [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
+ * [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
### 异步与微服务
-* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
-* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
-* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
-* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
+* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
+* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
+* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
+* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
### 通信
* 可权衡选择的方案:
- * 与客户端的外部通信 - [使用 REST 作为 HTTP API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
- * 服务器内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
-* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
+ * 与客户端的外部通信 - [使用 REST 作为 HTTP API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
+ * 服务器内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
+* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
### 安全性
-请参阅[「安全」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全)一章。
+请参阅[「安全」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全) 一章。
### 延迟数值
-请参阅[「每个程序员都应该知道的延迟数」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数)。
+请参阅[「每个程序员都应该知道的延迟数」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数) 。
### 持续探讨
diff --git a/solutions/system_design/sales_rank/README.md b/solutions/system_design/sales_rank/README.md
index 71ad1c7d..72ceadef 100644
--- a/solutions/system_design/sales_rank/README.md
+++ b/solutions/system_design/sales_rank/README.md
@@ -70,7 +70,7 @@ Handy conversion guide:
> Outline a high level design with all important components.
-
+
## Step 3: Design core components
@@ -95,93 +95,93 @@ t5 product4 category1 1 5.00 5 6
...
```
-The **Sales Rank Service** could use **MapReduce**, using the **Sales API** server log files as input and writing the results to an aggregate table `sales_rank` in a **SQL Database**. We should discuss the [use cases and tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql).
+The **Sales Rank Service** could use **MapReduce**, using the **Sales API** server log files as input and writing the results to an aggregate table `sales_rank` in a **SQL Database**. We should discuss the [use cases and tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql) .
We'll use a multi-step **MapReduce**:
-* **Step 1** - Transform the data to `(category, product_id), sum(quantity)`
+* **Step 1** - Transform the data to `(category, product_id) , sum(quantity) `
* **Step 2** - Perform a distributed sort
```python
-class SalesRanker(MRJob):
+class SalesRanker(MRJob) :
- def within_past_week(self, timestamp):
+ def within_past_week(self, timestamp) :
"""Return True if timestamp is within past week, False otherwise."""
...
- def mapper(self, _, line):
+ def mapper(self, _, line) :
"""Parse each log line, extract and transform relevant lines.
Emit key value pairs of the form:
- (category1, product1), 2
- (category2, product1), 2
- (category2, product1), 1
- (category1, product2), 3
- (category2, product3), 7
- (category1, product4), 1
+ (category1, product1) , 2
+ (category2, product1) , 2
+ (category2, product1) , 1
+ (category1, product2) , 3
+ (category2, product3) , 7
+ (category1, product4) , 1
"""
timestamp, product_id, category_id, quantity, total_price, seller_id, \
- buyer_id = line.split('\t')
- if self.within_past_week(timestamp):
- yield (category_id, product_id), quantity
+ buyer_id = line.split('\t')
+ if self.within_past_week(timestamp) :
+ yield (category_id, product_id) , quantity
- def reducer(self, key, value):
+ def reducer(self, key, values) :
"""Sum values for each key.
- (category1, product1), 2
- (category2, product1), 3
- (category1, product2), 3
- (category2, product3), 7
- (category1, product4), 1
+ (category1, product1) , 2
+ (category2, product1) , 3
+ (category1, product2) , 3
+ (category2, product3) , 7
+ (category1, product4) , 1
"""
- yield key, sum(values)
+ yield key, sum(values)
- def mapper_sort(self, key, value):
+ def mapper_sort(self, key, value) :
"""Construct key to ensure proper sorting.
Transform key and value to the form:
- (category1, 2), product1
- (category2, 3), product1
- (category1, 3), product2
- (category2, 7), product3
- (category1, 1), product4
+ (category1, 2) , product1
+ (category2, 3) , product1
+ (category1, 3) , product2
+ (category2, 7) , product3
+ (category1, 1) , product4
The shuffle/sort step of MapReduce will then do a
distributed sort on the keys, resulting in:
- (category1, 1), product4
- (category1, 2), product1
- (category1, 3), product2
- (category2, 3), product1
- (category2, 7), product3
+ (category1, 1) , product4
+ (category1, 2) , product1
+ (category1, 3) , product2
+ (category2, 3) , product1
+ (category2, 7) , product3
"""
category_id, product_id = key
quantity = value
- yield (category_id, quantity), product_id
+ yield (category_id, quantity) , product_id
- def reducer_identity(self, key, value):
+ def reducer_identity(self, key, value) :
yield key, value
- def steps(self):
+ def steps(self) :
"""Run the map and reduce steps."""
return [
self.mr(mapper=self.mapper,
- reducer=self.reducer),
+ reducer=self.reducer) ,
self.mr(mapper=self.mapper_sort,
- reducer=self.reducer_identity),
+ reducer=self.reducer_identity) ,
]
```
The result would be the following sorted list, which we could insert into the `sales_rank` table:
```
-(category1, 1), product4
-(category1, 2), product1
-(category1, 3), product2
-(category2, 3), product1
-(category2, 7), product3
+(category1, 1) , product4
+(category1, 2) , product1
+(category1, 3) , product2
+(category2, 3) , product1
+(category2, 7) , product3
```
The `sales_rank` table could have the following structure:
@@ -191,20 +191,20 @@ id int NOT NULL AUTO_INCREMENT
category_id int NOT NULL
total_sold int NOT NULL
product_id int NOT NULL
-PRIMARY KEY(id)
-FOREIGN KEY(category_id) REFERENCES Categories(id)
-FOREIGN KEY(product_id) REFERENCES Products(id)
+PRIMARY KEY(id)
+FOREIGN KEY(category_id) REFERENCES Categories(id)
+FOREIGN KEY(product_id) REFERENCES Products(id)
```
We'll create an [index](https://github.com/donnemartin/system-design-primer#use-good-indices) on `id`, `category_id`, and `product_id` to speed up lookups (log-time instead of scanning the entire table) and to keep the data in memory. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.<sup>1</sup>
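As a quick back-of-the-envelope check, the cited multipliers work out roughly as follows:

```python
# Sequential 1 MB read latencies from the numbers cited above (approximate).
memory_us = 250           # ~250 microseconds from memory
ssd_us = memory_us * 4    # SSD is ~4x slower -> ~1 ms
disk_us = memory_us * 80  # disk is ~80x slower -> ~20 ms
print(ssd_us, disk_us)    # 1000 20000
```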
### Use case: User views the past week's most popular products by category
-* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
+* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* The **Web Server** forwards the request to the **Read API** server
* The **Read API** server reads from the **SQL Database** `sales_rank` table
-We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
+We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest) :
```
$ curl https://amazon.com/api/v1/popular?category_id=1234
@@ -233,13 +233,13 @@ Response:
},
```
-For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc).
+For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc) .
## Step 4: Scale the design
> Identify and address bottlenecks, given the constraints.
-
+
**Important: Do not simply jump right into the final design from the initial design!**
@@ -251,33 +251,33 @@ We'll introduce some components to complete the design and to address scalabilit
*To avoid repeating discussions*, refer to the following [system design topics](https://github.com/donnemartin/system-design-primer#index-of-system-design-topics) for main talking points, tradeoffs, and alternatives:
-* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
-* [CDN](https://github.com/donnemartin/system-design-primer#content-delivery-network)
-* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
-* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
-* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
-* [API server (application layer)](https://github.com/donnemartin/system-design-primer#application-layer)
-* [Cache](https://github.com/donnemartin/system-design-primer#cache)
-* [Relational database management system (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
-* [SQL write master-slave failover](https://github.com/donnemartin/system-design-primer#fail-over)
-* [Master-slave replication](https://github.com/donnemartin/system-design-primer#master-slave-replication)
-* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
-* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
+* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
+* [CDN](https://github.com/donnemartin/system-design-primer#content-delivery-network)
+* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
+* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
+* [Web server (reverse proxy) ](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
+* [API server (application layer) ](https://github.com/donnemartin/system-design-primer#application-layer)
+* [Cache](https://github.com/donnemartin/system-design-primer#cache)
+* [Relational database management system (RDBMS) ](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
+* [SQL write master-slave failover](https://github.com/donnemartin/system-design-primer#fail-over)
+* [Master-slave replication](https://github.com/donnemartin/system-design-primer#master-slave-replication)
+* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
+* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
The **Analytics Database** could use a data warehousing solution such as Amazon Redshift or Google BigQuery.
We might only want to store a limited time period of data in the database, while storing the rest in a data warehouse or in an **Object Store**. An **Object Store** such as Amazon S3 can comfortably handle the constraint of 40 GB of new content per month.
-To address the 40,000 *average* read requests per second (higher at peak), traffic for popular content (and their sales rank) should be handled by the **Memory Cache** instead of the database. The **Memory Cache** is also useful for handling the unevenly distributed traffic and traffic spikes. With the large volume of reads, the **SQL Read Replicas** might not be able to handle the cache misses. We'll probably need to employ additional SQL scaling patterns.
+To address the 40,000 *average* read requests per second (higher at peak) , traffic for popular content (and their sales rank) should be handled by the **Memory Cache** instead of the database. The **Memory Cache** is also useful for handling the unevenly distributed traffic and traffic spikes. With the large volume of reads, the **SQL Read Replicas** might not be able to handle the cache misses. We'll probably need to employ additional SQL scaling patterns.
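A minimal cache-aside sketch for this read path, where a plain dict stands in for the **Memory Cache** and `db_query` for a read against the replicas (both names are assumptions for illustration):

```python
# Cache-aside: check the Memory Cache first, fall back to the database
# on a miss and populate the cache so repeat reads are served from memory.
cache = {}

def db_query(category_id):
    # Stand-in for a read of the sales_rank table on the SQL read replicas.
    return ['product4', 'product1', 'product2']

def popular_products(category_id):
    key = 'popular:{0}'.format(category_id)
    if key not in cache:                     # cache miss
        cache[key] = db_query(category_id)   # populate from the database
    return cache[key]                        # cache hit on later calls

popular_products(1234)  # miss: reads from the database
popular_products(1234)  # hit: served from the cache
```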
400 *average* writes per second (higher at peak) might be tough for a single **SQL Write Master-Slave**, also pointing to a need for additional scaling techniques.
SQL scaling patterns include:
-* [Federation](https://github.com/donnemartin/system-design-primer#federation)
-* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
-* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
-* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
+* [Federation](https://github.com/donnemartin/system-design-primer#federation)
+* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
+* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
+* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
We should also consider moving some data to a **NoSQL Database**.
@@ -287,50 +287,50 @@ We should also consider moving some data to a **NoSQL Database**.
#### NoSQL
-* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
-* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
-* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
-* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
-* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
+* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
+* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
+* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
+* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
+* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### Caching
* Where to cache
- * [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
- * [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
- * [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
- * [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
- * [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
+ * [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
+ * [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
+ * [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
+ * [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
+ * [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* What to cache
- * [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
- * [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
+ * [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
+ * [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* When to update the cache
- * [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
- * [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
- * [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
- * [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
+ * [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
+ * [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
+ * [Write-behind (write-back) ](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
+ * [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
### Asynchronism and microservices
-* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
-* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
-* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
-* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
+* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
+* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
+* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
+* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
### Communications
* Discuss tradeoffs:
- * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
- * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
-* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
+ * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
+ * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
+* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
### Security
-Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
+Refer to the [security section](https://github.com/donnemartin/system-design-primer#security) .
### Latency numbers
-See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
+See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know) .
### Ongoing
diff --git a/solutions/system_design/sales_rank/sales_rank_mapreduce.py b/solutions/system_design/sales_rank/sales_rank_mapreduce.py
index 6eeeb525..a0412a24 100644
--- a/solutions/system_design/sales_rank/sales_rank_mapreduce.py
+++ b/solutions/system_design/sales_rank/sales_rank_mapreduce.py
@@ -3,75 +3,75 @@
from mrjob.job import MRJob
-class SalesRanker(MRJob):
+class SalesRanker(MRJob) :
- def within_past_week(self, timestamp):
+ def within_past_week(self, timestamp) :
"""Return True if timestamp is within past week, False otherwise."""
...
- def mapper(self, _, line):
+ def mapper(self, _, line) :
"""Parse each log line, extract and transform relevant lines.
Emit key value pairs of the form:
- (foo, p1), 2
- (bar, p1), 2
- (bar, p1), 1
- (foo, p2), 3
- (bar, p3), 10
- (foo, p4), 1
+ (foo, p1) , 2
+ (bar, p1) , 2
+ (bar, p1) , 1
+ (foo, p2) , 3
+ (bar, p3) , 10
+ (foo, p4) , 1
"""
- timestamp, product_id, category, quantity = line.split('\t')
- if self.within_past_week(timestamp):
- yield (category, product_id), quantity
+ timestamp, product_id, category, quantity = line.split('\t')
+ if self.within_past_week(timestamp) :
+ yield (category, product_id) , quantity
- def reducer(self, key, values):
+ def reducer(self, key, values) :
"""Sum values for each key.
- (foo, p1), 2
- (bar, p1), 3
- (foo, p2), 3
- (bar, p3), 10
- (foo, p4), 1
+ (foo, p1) , 2
+ (bar, p1) , 3
+ (foo, p2) , 3
+ (bar, p3) , 10
+ (foo, p4) , 1
"""
- yield key, sum(values)
+ yield key, sum(values)
- def mapper_sort(self, key, value):
+ def mapper_sort(self, key, value) :
"""Construct key to ensure proper sorting.
Transform key and value to the form:
- (foo, 2), p1
- (bar, 3), p1
- (foo, 3), p2
- (bar, 10), p3
- (foo, 1), p4
+ (foo, 2) , p1
+ (bar, 3) , p1
+ (foo, 3) , p2
+ (bar, 10) , p3
+ (foo, 1) , p4
The shuffle/sort step of MapReduce will then do a
distributed sort on the keys, resulting in:
- (category1, 1), product4
- (category1, 2), product1
- (category1, 3), product2
- (category2, 3), product1
- (category2, 7), product3
+ (category1, 1) , product4
+ (category1, 2) , product1
+ (category1, 3) , product2
+ (category2, 3) , product1
+ (category2, 7) , product3
"""
category, product_id = key
quantity = value
- yield (category, quantity), product_id
+ yield (category, quantity) , product_id
- def reducer_identity(self, key, value):
+ def reducer_identity(self, key, value) :
yield key, value
- def steps(self):
+ def steps(self) :
"""Run the map and reduce steps."""
return [
self.mr(mapper=self.mapper,
- reducer=self.reducer),
+ reducer=self.reducer) ,
self.mr(mapper=self.mapper_sort,
- reducer=self.reducer_identity),
+ reducer=self.reducer_identity) ,
]
if __name__ == '__main__':
- SalesRanker.run()
+ SalesRanker.run()
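The two MapReduce steps above can be simulated in plain Python (no mrjob, log data hypothetical) to show what the job computes end to end:

```python
# Pure-Python simulation of the two steps: step 1 sums quantities per
# (category, product) key; step 2 re-keys to (category, total) so the
# shuffle/sort phase orders products by sales within each category.
from collections import defaultdict

logs = [('foo', 'p1', 2), ('bar', 'p1', 2), ('bar', 'p1', 1),
        ('foo', 'p2', 3), ('bar', 'p3', 10), ('foo', 'p4', 1)]

totals = defaultdict(int)
for category, product_id, quantity in logs:  # mapper + reducer
    totals[(category, product_id)] += quantity

ranked = sorted(((category, total), product_id)  # mapper_sort + sort
                for (category, product_id), total in totals.items())
print(ranked)
# [(('bar', 3), 'p1'), (('bar', 10), 'p3'), (('foo', 1), 'p4'), ...]
```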
diff --git a/solutions/system_design/scaling_aws/README-zh-Hans.md b/solutions/system_design/scaling_aws/README-zh-Hans.md
index c071c70e..050d2416 100644
--- a/solutions/system_design/scaling_aws/README-zh-Hans.md
+++ b/solutions/system_design/scaling_aws/README-zh-Hans.md
@@ -64,7 +64,7 @@
> 用所有重要组件概述高水平设计
-
+
## 第 3 步:设计核心组件
@@ -83,7 +83,7 @@
* **Web 服务器** 在 EC2 上
* 存储用户数据
- * [**MySQL 数据库**](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
+ * [**MySQL 数据库**](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
运用 **纵向扩展**:
@@ -96,7 +96,7 @@
**折中方案, 可选方案, 和其他细节:**
-* **纵向扩展** 的可选方案是 [**横向扩展**](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
+* **纵向扩展** 的可选方案是 [**横向扩展**](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
#### 自 SQL 开始,但认真考虑 NoSQL
@@ -104,7 +104,7 @@
**折中方案, 可选方案, 和其他细节:**
-* 查阅 [关系型数据库管理系统 (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) 章节
+* 查阅 [关系型数据库管理系统 (RDBMS) ](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) 章节
* 讨论使用 [SQL 或 NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql) 的原因
#### 分配公共静态 IP
@@ -139,7 +139,7 @@
### 用户+
-
+
#### 假设
@@ -191,7 +191,7 @@
### 用户+++
-
+
#### 假设
@@ -208,11 +208,11 @@
* 终止在 **负载平衡器** 上的SSL,以减少后端服务器上的计算负载,并简化证书管理
* 在多个可用区域中使用多台 **Web服务器**
* 在多个可用区域的 [**主-从 故障转移**](https://github.com/donnemartin/system-design-primer#master-slave-replication) 模式中使用多个 **MySQL** 实例来改进冗余
-* 分离 **Web 服务器** 和 [**应用服务器**](https://github.com/donnemartin/system-design-primer#application-layer)
+* 分离 **Web 服务器** 和 [**应用服务器**](https://github.com/donnemartin/system-design-primer#application-layer)
* 独立扩展和配置每一层
- * **Web 服务器** 可以作为 [**反向代理**](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
+ * **Web 服务器** 可以作为 [**反向代理**](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* 例如, 你可以添加 **应用服务器** 处理 **读 API** 而另外一些处理 **写 API**
-* 将静态(和一些动态)内容转移到 [**内容分发网络 (CDN)**](https://github.com/donnemartin/system-design-primer#content-delivery-network) 例如 CloudFront 以减少负载和延迟
+* 将静态(和一些动态)内容转移到 [**内容分发网络 (CDN) **](https://github.com/donnemartin/system-design-primer#content-delivery-network) 例如 CloudFront 以减少负载和延迟
**折中方案, 可选方案, 和其他细节:**
@@ -220,7 +220,7 @@
### 用户+++
-
+
**注意:** **内部负载均衡** 不显示以减少混乱
@@ -232,7 +232,7 @@
* 下面的目标试图解决 **MySQL数据库** 的伸缩性问题
* * 基于 **基准/负载测试** 和 **分析**,你可能只需要实现其中的一两个技术
-* 将下列数据移动到一个 [**内存缓存**](https://github.com/donnemartin/system-design-primer#cache),例如弹性缓存,以减少负载和延迟:
+* 将下列数据移动到一个 [**内存缓存**](https://github.com/donnemartin/system-design-primer#cache) ,例如弹性缓存,以减少负载和延迟:
* **MySQL** 中频繁访问的内容
* 首先, 尝试配置 **MySQL 数据库** 缓存以查看是否足以在实现 **内存缓存** 之前缓解瓶颈
* 来自 **Web 服务器** 的会话数据
@@ -254,11 +254,11 @@
**折中方案, 可选方案, 和其他细节:**
-* 查阅 [关系型数据库管理系统 (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) 章节
+* 查阅 [关系型数据库管理系统 (RDBMS) ](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) 章节
### 用户++++
-
+
#### 假设
@@ -297,7 +297,7 @@
### 用户+++++
-
+
**注释:** **自动伸缩** 组不显示以减少混乱
@@ -317,10 +317,10 @@
SQL 扩展模型包括:
-* [集合](https://github.com/donnemartin/system-design-primer#federation)
-* [分片](https://github.com/donnemartin/system-design-primer#sharding)
-* [反范式](https://github.com/donnemartin/system-design-primer#denormalization)
-* [SQL 调优](https://github.com/donnemartin/system-design-primer#sql-tuning)
+* [集合](https://github.com/donnemartin/system-design-primer#federation)
+* [分片](https://github.com/donnemartin/system-design-primer#sharding)
+* [反范式](https://github.com/donnemartin/system-design-primer#denormalization)
+* [SQL 调优](https://github.com/donnemartin/system-design-primer#sql-tuning)
为了进一步处理高读和写请求,我们还应该考虑将适当的数据移动到一个 [**NoSQL数据库**](https://github.com/donnemartin/system-design-primer#nosql) ,例如 DynamoDB。
@@ -344,58 +344,58 @@ SQL 扩展模型包括:
### SQL 扩展模式
-* [读取副本](https://github.com/donnemartin/system-design-primer#master-slave-replication)
-* [集合](https://github.com/donnemartin/system-design-primer#federation)
-* [分区](https://github.com/donnemartin/system-design-primer#sharding)
-* [反规范化](https://github.com/donnemartin/system-design-primer#denormalization)
-* [SQL 调优](https://github.com/donnemartin/system-design-primer#sql-tuning)
+* [读取副本](https://github.com/donnemartin/system-design-primer#master-slave-replication)
+* [集合](https://github.com/donnemartin/system-design-primer#federation)
+* [分区](https://github.com/donnemartin/system-design-primer#sharding)
+* [反规范化](https://github.com/donnemartin/system-design-primer#denormalization)
+* [SQL 调优](https://github.com/donnemartin/system-design-primer#sql-tuning)
#### NoSQL
-* [键值存储](https://github.com/donnemartin/system-design-primer#key-value-store)
-* [文档存储](https://github.com/donnemartin/system-design-primer#document-store)
-* [宽表存储](https://github.com/donnemartin/system-design-primer#wide-column-store)
-* [图数据库](https://github.com/donnemartin/system-design-primer#graph-database)
-* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
+* [键值存储](https://github.com/donnemartin/system-design-primer#key-value-store)
+* [文档存储](https://github.com/donnemartin/system-design-primer#document-store)
+* [宽表存储](https://github.com/donnemartin/system-design-primer#wide-column-store)
+* [图数据库](https://github.com/donnemartin/system-design-primer#graph-database)
+* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### 缓存
* 缓存到哪里
- * [客户端缓存](https://github.com/donnemartin/system-design-primer#client-caching)
- * [CDN 缓存](https://github.com/donnemartin/system-design-primer#cdn-caching)
- * [Web 服务缓存](https://github.com/donnemartin/system-design-primer#web-server-caching)
- * [数据库缓存](https://github.com/donnemartin/system-design-primer#database-caching)
- * [应用缓存](https://github.com/donnemartin/system-design-primer#application-caching)
+ * [客户端缓存](https://github.com/donnemartin/system-design-primer#client-caching)
+ * [CDN 缓存](https://github.com/donnemartin/system-design-primer#cdn-caching)
+ * [Web 服务缓存](https://github.com/donnemartin/system-design-primer#web-server-caching)
+ * [数据库缓存](https://github.com/donnemartin/system-design-primer#database-caching)
+ * [应用缓存](https://github.com/donnemartin/system-design-primer#application-caching)
* 缓存什么
- * [数据库请求层缓存](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
- * [对象层缓存](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
+ * [数据库请求层缓存](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
+ * [对象层缓存](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* 何时更新缓存
- * [预留缓存](https://github.com/donnemartin/system-design-primer#cache-aside)
- * [完全写入](https://github.com/donnemartin/system-design-primer#write-through)
- * [延迟写 (写回)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
- * [事先更新](https://github.com/donnemartin/system-design-primer#refresh-ahead)
+ * [预留缓存](https://github.com/donnemartin/system-design-primer#cache-aside)
+ * [完全写入](https://github.com/donnemartin/system-design-primer#write-through)
+ * [延迟写 (写回) ](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
+ * [事先更新](https://github.com/donnemartin/system-design-primer#refresh-ahead)
### 异步性和微服务
-* [消息队列](https://github.com/donnemartin/system-design-primer#message-queues)
-* [任务队列](https://github.com/donnemartin/system-design-primer#task-queues)
-* [回退压力](https://github.com/donnemartin/system-design-primer#back-pressure)
-* [微服务](https://github.com/donnemartin/system-design-primer#microservices)
+* [消息队列](https://github.com/donnemartin/system-design-primer#message-queues)
+* [任务队列](https://github.com/donnemartin/system-design-primer#task-queues)
+* [回退压力](https://github.com/donnemartin/system-design-primer#back-pressure)
+* [微服务](https://github.com/donnemartin/system-design-primer#microservices)
### 沟通
* 关于折中方案的讨论:
- * 客户端的外部通讯 - [遵循 REST 的 HTTP APIs](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
- * 内部通讯 - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
-* [服务探索](https://github.com/donnemartin/system-design-primer#service-discovery)
+ * 客户端的外部通讯 - [遵循 REST 的 HTTP APIs](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
+ * 内部通讯 - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
+* [服务探索](https://github.com/donnemartin/system-design-primer#service-discovery)
### 安全性
-参考 [安全章节](https://github.com/donnemartin/system-design-primer#security)
+参考 [安全章节](https://github.com/donnemartin/system-design-primer#security)
### 延迟数字指标
-查阅 [每个程序员必懂的延迟数字](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know)
+查阅 [每个程序员必懂的延迟数字](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know)
### 正在进行
diff --git a/solutions/system_design/scaling_aws/README.md b/solutions/system_design/scaling_aws/README.md
index 99af0cff..c6d02df2 100644
--- a/solutions/system_design/scaling_aws/README.md
+++ b/solutions/system_design/scaling_aws/README.md
@@ -64,7 +64,7 @@ Handy conversion guide:
> Outline a high level design with all important components.
-
+
## Step 3: Design core components
@@ -83,7 +83,7 @@ Handy conversion guide:
* **Web server** on EC2
* Storage for user data
- * [**MySQL Database**](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
+ * [**MySQL Database**](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
Use **Vertical Scaling**:
@@ -96,7 +96,7 @@ Use **Vertical Scaling**:
*Trade-offs, alternatives, and additional details:*
-* The alternative to **Vertical Scaling** is [**Horizontal scaling**](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
+* The alternative to **Vertical Scaling** is [**Horizontal scaling**](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
#### Start with SQL, consider NoSQL
@@ -104,8 +104,8 @@ The constraints assume there is a need for relational data. We can start off us
*Trade-offs, alternatives, and additional details:*
-* See the [Relational database management system (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) section
-* Discuss reasons to use [SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
+* See the [Relational database management system (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) section
+* Discuss reasons to use [SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
#### Assign a public static IP
@@ -139,7 +139,7 @@ Add a **DNS** such as Route 53 to map the domain to the instance's public IP.
### Users+
-
+
#### Assumptions
@@ -191,7 +191,7 @@ We've been able to address these issues with **Vertical Scaling** so far. Unfor
### Users++
-
+
#### Assumptions
@@ -208,11 +208,11 @@ Our **Benchmarks/Load Tests** and **Profiling** show that our single **Web Serve
* Terminate SSL on the **Load Balancer** to reduce computational load on backend servers and to simplify certificate administration
* Use multiple **Web Servers** spread out over multiple availability zones
* Use multiple **MySQL** instances in [**Master-Slave Failover**](https://github.com/donnemartin/system-design-primer#master-slave-replication) mode across multiple availability zones to improve redundancy
-* Separate out the **Web Servers** from the [**Application Servers**](https://github.com/donnemartin/system-design-primer#application-layer)
+* Separate out the **Web Servers** from the [**Application Servers**](https://github.com/donnemartin/system-design-primer#application-layer)
* Scale and configure both layers independently
- * **Web Servers** can run as a [**Reverse Proxy**](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
+ * **Web Servers** can run as a [**Reverse Proxy**](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* For example, you can add **Application Servers** handling **Read APIs** while others handle **Write APIs**
-* Move static (and some dynamic) content to a [**Content Delivery Network (CDN)**](https://github.com/donnemartin/system-design-primer#content-delivery-network) such as CloudFront to reduce load and latency
+* Move static (and some dynamic) content to a [**Content Delivery Network (CDN)**](https://github.com/donnemartin/system-design-primer#content-delivery-network) such as CloudFront to reduce load and latency
*Trade-offs, alternatives, and additional details:*
@@ -220,7 +220,7 @@ Our **Benchmarks/Load Tests** and **Profiling** show that our single **Web Serve
### Users+++
-
+
**Note:** **Internal Load Balancers** not shown to reduce clutter
@@ -249,16 +249,16 @@ Our **Benchmarks/Load Tests** and **Profiling** show that we are read-heavy (100
* In addition to adding and scaling a **Memory Cache**, **MySQL Read Replicas** can also help relieve load on the **MySQL Write Master**
* Add logic to **Web Server** to separate out writes and reads
-* Add **Load Balancers** in front of **MySQL Read Replicas** (not pictured to reduce clutter)
+* Add **Load Balancers** in front of **MySQL Read Replicas** (not pictured to reduce clutter)
* Most services are read-heavy vs write-heavy
*Trade-offs, alternatives, and additional details:*
-* See the [Relational database management system (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) section
+* See the [Relational database management system (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) section
### Users++++
-
+
#### Assumptions
@@ -297,7 +297,7 @@ Our **Benchmarks/Load Tests** and **Profiling** show that our traffic spikes dur
### Users+++++
-
+
**Note:** **Autoscaling** groups not shown to reduce clutter
@@ -317,10 +317,10 @@ We'll continue to address scaling issues due to the problem's constraints:
SQL scaling patterns include:
-* [Federation](https://github.com/donnemartin/system-design-primer#federation)
-* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
-* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
-* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
+* [Federation](https://github.com/donnemartin/system-design-primer#federation)
+* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
+* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
+* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
To further address the high read and write requests, we should also consider moving appropriate data to a [**NoSQL Database**](https://github.com/donnemartin/system-design-primer#nosql) such as DynamoDB.
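The sharding pattern listed above can be illustrated with a minimal hash-based router; `NUM_SHARDS` and the modulo scheme are assumptions for illustration, not part of the original design:

```python
# Minimal sharding sketch: route each user id deterministically to one
# of several database shards. NUM_SHARDS and the modulo routing are
# illustrative assumptions only.
NUM_SHARDS = 4

def shard_for(user_id):
    """Pick a shard index deterministically from the user id."""
    return user_id % NUM_SHARDS

# Example: users 0-7 spread evenly across the 4 shards
assignments = {user_id: shard_for(user_id) for user_id in range(8)}
```

A real deployment would more likely use consistent hashing so that adding a shard does not remap most keys.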
@@ -344,58 +344,58 @@ We can further separate out our [**Application Servers**](https://github.com/don
### SQL scaling patterns
-* [Read replicas](https://github.com/donnemartin/system-design-primer#master-slave-replication)
-* [Federation](https://github.com/donnemartin/system-design-primer#federation)
-* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
-* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
-* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
+* [Read replicas](https://github.com/donnemartin/system-design-primer#master-slave-replication)
+* [Federation](https://github.com/donnemartin/system-design-primer#federation)
+* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
+* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
+* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
#### NoSQL
-* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
-* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
-* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
-* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
-* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
+* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
+* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
+* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
+* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
+* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### Caching
* Where to cache
- * [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
- * [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
- * [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
- * [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
- * [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
+ * [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
+ * [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
+ * [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
+ * [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
+ * [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* What to cache
- * [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
- * [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
+ * [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
+ * [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* When to update the cache
- * [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
- * [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
- * [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
- * [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
+ * [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
+ * [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
+ * [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
+ * [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
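The cache-aside pattern listed above can be sketched as follows; the `cache` and `db` dictionaries stand in for Memcached and MySQL and are assumptions for illustration:

```python
# Cache-aside sketch: the application checks the cache first and falls
# back to the database on a miss, populating the cache afterwards.
# The dicts below stand in for Memcached and MySQL (illustrative only).
cache = {}
db = {'user:1': {'name': 'Alice'}}

def get_user(key):
    if key in cache:           # cache hit
        return cache[key]
    value = db.get(key)        # cache miss: read from the database
    if value is not None:
        cache[key] = value     # populate the cache for later reads
    return value
```

Note that with cache-aside, only data that is actually requested ends up cached, at the cost of an extra round trip on each miss.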
### Asynchronism and microservices
-* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
-* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
-* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
-* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
+* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
+* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
+* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
+* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
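Back pressure, listed above, can be illustrated with a bounded task queue that rejects new work once full; the queue size and rejection policy here are illustrative assumptions:

```python
import queue

# Back-pressure sketch: a bounded task queue rejects new work once it
# is full, pushing the load back onto producers instead of growing
# unbounded. The maxsize and rejection policy are illustrative only.
tasks = queue.Queue(maxsize=2)

def submit(task):
    """Return True if the task was accepted, False if back pressure applies."""
    try:
        tasks.put_nowait(task)
        return True
    except queue.Full:
        return False

accepted = [submit(t) for t in ('t1', 't2', 't3')]  # third submission is rejected
```

Rejected producers can then retry with exponential backoff rather than overwhelming the workers.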
### Communications
* Discuss tradeoffs:
- * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
- * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
-* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
+ * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
+ * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
+* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
### Security
-Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
+Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
### Latency numbers
-See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
+See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
### Ongoing
diff --git a/solutions/system_design/social_graph/README-zh-Hans.md b/solutions/system_design/social_graph/README-zh-Hans.md
index 07b8e3e7..dc2f19ed 100644
--- a/solutions/system_design/social_graph/README-zh-Hans.md
+++ b/solutions/system_design/social_graph/README-zh-Hans.md
@@ -29,7 +29,7 @@
* 每个用户平均有 50 个朋友
* 每月 10 亿次朋友搜索
-训练使用更传统的系统 - 别用图特有的解决方案例如 [GraphQL](http://graphql.org/) 或图数据库如 [Neo4j](https://neo4j.com/)。
+训练使用更传统的系统 - 别用图特有的解决方案例如 [GraphQL](http://graphql.org/) 或图数据库如 [Neo4j](https://neo4j.com/)。
#### 计算使用
@@ -50,7 +50,7 @@
> 用所有重要组件概述高水平设计
-
+
## 第 3 步:设计核心组件
@@ -63,37 +63,37 @@
没有百万用户(点)的和十亿朋友关系(边)的限制,我们能够用一般 BFS 方法解决无权重最短路径任务:
```python
-class Graph(Graph):
+class Graph(Graph):
- def shortest_path(self, source, dest):
+ def shortest_path(self, source, dest):
if source is None or dest is None:
return None
if source is dest:
return [source.key]
- prev_node_keys = self._shortest_path(source, dest)
+ prev_node_keys = self._shortest_path(source, dest)
if prev_node_keys is None:
return None
else:
path_ids = [dest.key]
prev_node_key = prev_node_keys[dest.key]
while prev_node_key is not None:
- path_ids.append(prev_node_key)
+ path_ids.append(prev_node_key)
prev_node_key = prev_node_keys[prev_node_key]
return path_ids[::-1]
- def _shortest_path(self, source, dest):
- queue = deque()
- queue.append(source)
+ def _shortest_path(self, source, dest):
+ queue = deque()
+ queue.append(source)
prev_node_keys = {source.key: None}
source.visit_state = State.visited
while queue:
- node = queue.popleft()
+ node = queue.popleft()
if node is dest:
return prev_node_keys
prev_node = node
- for adj_node in node.adj_nodes.values():
+ for adj_node in node.adj_nodes.values():
if adj_node.visit_state == State.unvisited:
- queue.append(adj_node)
+ queue.append(adj_node)
prev_node_keys[adj_node.key] = prev_node.key
adj_node.visit_state = State.visited
return None
@@ -101,7 +101,7 @@ class Graph(Graph):
我们不能在同一台机器上满足所有用户,我们需要通过 **人员服务器** [拆分](https://github.com/donnemartin/system-design-primer#sharding) 用户并且通过 **查询服务** 访问。
-* **客户端** 向 **服务器** 发送请求,**服务器** 作为 [反向代理](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
+* **客户端** 向 **服务器** 发送请求,**服务器** 作为 [反向代理](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* **搜索 API** 服务器向 **用户图服务** 转发请求
* **用户图服务** 有以下功能:
* 使用 **查询服务** 找到当前用户信息存储的 **人员服务器**
@@ -117,43 +117,43 @@ class Graph(Graph):
**查询服务** 实现:
```python
-class LookupService(object):
+class LookupService(object):
- def __init__(self):
- self.lookup = self._init_lookup() # key: person_id, value: person_server
+ def __init__(self):
+ self.lookup = self._init_lookup() # key: person_id, value: person_server
- def _init_lookup(self):
+ def _init_lookup(self):
...
- def lookup_person_server(self, person_id):
+ def lookup_person_server(self, person_id):
return self.lookup[person_id]
```
**人员服务器** 实现:
```python
-class PersonServer(object):
+class PersonServer(object):
- def __init__(self):
+ def __init__(self):
self.people = {} # key: person_id, value: person
- def add_person(self, person):
+ def add_person(self, person):
...
- def people(self, ids):
+ def people(self, ids):
results = []
for id in ids:
if id in self.people:
- results.append(self.people[id])
+ results.append(self.people[id])
return results
```
**用户** 实现:
```python
-class Person(object):
+class Person(object):
- def __init__(self, id, name, friend_ids):
+ def __init__(self, id, name, friend_ids):
self.id = id
self.name = name
self.friend_ids = friend_ids
@@ -162,21 +162,21 @@ class Person(object):
**用户图服务** 实现:
```python
-class UserGraphService(object):
+class UserGraphService(object):
- def __init__(self, lookup_service):
+ def __init__(self, lookup_service):
self.lookup_service = lookup_service
- def person(self, person_id):
- person_server = self.lookup_service.lookup_person_server(person_id)
- return person_server.people([person_id])
+ def person(self, person_id):
+ person_server = self.lookup_service.lookup_person_server(person_id)
+ return person_server.people([person_id])
- def shortest_path(self, source_key, dest_key):
+ def shortest_path(self, source_key, dest_key):
if source_key is None or dest_key is None:
return None
if source_key is dest_key:
return [source_key]
- prev_node_keys = self._shortest_path(source_key, dest_key)
+ prev_node_keys = self._shortest_path(source_key, dest_key)
if prev_node_keys is None:
return None
else:
@@ -184,40 +184,40 @@ class UserGraphService(object):
path_ids = [dest_key]
prev_node_key = prev_node_keys[dest_key]
while prev_node_key is not None:
- path_ids.append(prev_node_key)
+ path_ids.append(prev_node_key)
prev_node_key = prev_node_keys[prev_node_key]
# Reverse the list since we iterated backwards
return path_ids[::-1]
- def _shortest_path(self, source_key, dest_key, path):
+ def _shortest_path(self, source_key, dest_key):
# Use the id to get the Person
- source = self.person(source_key)
+ source = self.person(source_key)
# Update our bfs queue
- queue = deque()
- queue.append(source)
+ queue = deque()
+ queue.append(source)
# prev_node_keys keeps track of each hop from
# the source_key to the dest_key
prev_node_keys = {source_key: None}
# We'll use visited_ids to keep track of which nodes we've
# visited, which can be different from a typical bfs where
# this can be stored in the node itself
- visited_ids = set()
- visited_ids.add(source.id)
+ visited_ids = set()
+ visited_ids.add(source.id)
while queue:
- node = queue.popleft()
+ node = queue.popleft()
if node.key is dest_key:
return prev_node_keys
prev_node = node
for friend_id in node.friend_ids:
if friend_id not in visited_ids:
- friend_node = self.person(friend_id)
- queue.append(friend_node)
+ friend_node = self.person(friend_id)
+ queue.append(friend_node)
prev_node_keys[friend_id] = prev_node.key
- visited_ids.add(friend_id)
+ visited_ids.add(friend_id)
return None
```
-我们用的是公共的 [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
+我们用的是公共的 [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
```
$ curl https://social.com/api/v1/friend_search?person_id=1234
@@ -243,13 +243,13 @@ $ curl https://social.com/api/v1/friend_search?person_id=1234
},
```
-内部通信使用 [远端过程调用](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)。
+内部通信使用 [远端过程调用](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)。
## 第 4 步:扩展设计
> 在给定约束条件下,定义和确认瓶颈。
-
+
**重要:别简化从最初设计到最终设计的过程!**
@@ -261,14 +261,14 @@ $ curl https://social.com/api/v1/friend_search?person_id=1234
**避免重复讨论**,以下网址链接到 [系统设计主题](https://github.com/donnemartin/system-design-primer#index-of-system-design-topics) 相关的主流方案、折中方案和替代方案。
-* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
-* [负载均衡](https://github.com/donnemartin/system-design-primer#load-balancer)
-* [横向扩展](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
-* [Web 服务器(反向代理)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
-* [API 服务器(应用层)](https://github.com/donnemartin/system-design-primer#application-layer)
-* [缓存](https://github.com/donnemartin/system-design-primer#cache)
-* [一致性模式](https://github.com/donnemartin/system-design-primer#consistency-patterns)
-* [可用性模式](https://github.com/donnemartin/system-design-primer#availability-patterns)
+* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
+* [负载均衡](https://github.com/donnemartin/system-design-primer#load-balancer)
+* [横向扩展](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
+* [Web 服务器(反向代理)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
+* [API 服务器(应用层)](https://github.com/donnemartin/system-design-primer#application-layer)
+* [缓存](https://github.com/donnemartin/system-design-primer#cache)
+* [一致性模式](https://github.com/donnemartin/system-design-primer#consistency-patterns)
+* [可用性模式](https://github.com/donnemartin/system-design-primer#availability-patterns)
解决 **平均** 每秒 400 次请求的限制(峰值),人员数据可以存在例如 Redis 或 Memcached 这样的 **内存** 中以减少响应次数和下游流量通信服务。这尤其在用户执行多次连续查询和查询哪些广泛连接的人时十分有用。从内存中读取 1MB 数据大约要 250 微秒,从 SSD 中读取同样大小的数据时间要长 4 倍,从硬盘要长 80 倍。1
@@ -279,9 +279,9 @@ $ curl https://social.com/api/v1/friend_search?person_id=1234
* 在同一台 **人员服务器** 上托管批处理同一批朋友查找减少机器跳转
* 通过地理位置 [拆分](https://github.com/donnemartin/system-design-primer#sharding) **人员服务器** 来进一步优化,因为朋友通常住得都比较近
* 同时进行两个 BFS 查找,一个从 source 开始,一个从 destination 开始,然后合并两个路径
-* 从有庞大朋友圈的人开始找起,这样更有可能减小当前用户和搜索目标之间的 [离散度数](https://en.wikipedia.org/wiki/Six_degrees_of_separation)
+* 从有庞大朋友圈的人开始找起,这样更有可能减小当前用户和搜索目标之间的 [离散度数](https://en.wikipedia.org/wiki/Six_degrees_of_separation)
* 在询问用户是否继续查询之前设置基于时间或跳跃数阈值,当在某些案例中搜索耗费时间过长时。
-* 使用类似 [Neo4j](https://neo4j.com/) 的 **图数据库** 或图特定查询语法,例如 [GraphQL](http://graphql.org/)(如果没有禁止使用 **图数据库** 的限制的话)
+* 使用类似 [Neo4j](https://neo4j.com/) 的 **图数据库** 或图特定查询语法,例如 [GraphQL](http://graphql.org/)(如果没有禁止使用 **图数据库** 的限制的话)
## 额外的话题
@@ -289,58 +289,58 @@ $ curl https://social.com/api/v1/friend_search?person_id=1234
### SQL 扩展模式
-* [读取副本](https://github.com/donnemartin/system-design-primer#master-slave-replication)
-* [集合](https://github.com/donnemartin/system-design-primer#federation)
-* [分区](https://github.com/donnemartin/system-design-primer#sharding)
-* [反规范化](https://github.com/donnemartin/system-design-primer#denormalization)
-* [SQL 调优](https://github.com/donnemartin/system-design-primer#sql-tuning)
+* [读取副本](https://github.com/donnemartin/system-design-primer#master-slave-replication)
+* [集合](https://github.com/donnemartin/system-design-primer#federation)
+* [分区](https://github.com/donnemartin/system-design-primer#sharding)
+* [反规范化](https://github.com/donnemartin/system-design-primer#denormalization)
+* [SQL 调优](https://github.com/donnemartin/system-design-primer#sql-tuning)
#### NoSQL
-* [键值存储](https://github.com/donnemartin/system-design-primer#key-value-store)
-* [文档存储](https://github.com/donnemartin/system-design-primer#document-store)
-* [宽表存储](https://github.com/donnemartin/system-design-primer#wide-column-store)
-* [图数据库](https://github.com/donnemartin/system-design-primer#graph-database)
-* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
+* [键值存储](https://github.com/donnemartin/system-design-primer#key-value-store)
+* [文档存储](https://github.com/donnemartin/system-design-primer#document-store)
+* [宽表存储](https://github.com/donnemartin/system-design-primer#wide-column-store)
+* [图数据库](https://github.com/donnemartin/system-design-primer#graph-database)
+* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### 缓存
* 缓存到哪里
- * [客户端缓存](https://github.com/donnemartin/system-design-primer#client-caching)
- * [CDN 缓存](https://github.com/donnemartin/system-design-primer#cdn-caching)
- * [Web 服务缓存](https://github.com/donnemartin/system-design-primer#web-server-caching)
- * [数据库缓存](https://github.com/donnemartin/system-design-primer#database-caching)
- * [应用缓存](https://github.com/donnemartin/system-design-primer#application-caching)
+ * [客户端缓存](https://github.com/donnemartin/system-design-primer#client-caching)
+ * [CDN 缓存](https://github.com/donnemartin/system-design-primer#cdn-caching)
+ * [Web 服务缓存](https://github.com/donnemartin/system-design-primer#web-server-caching)
+ * [数据库缓存](https://github.com/donnemartin/system-design-primer#database-caching)
+ * [应用缓存](https://github.com/donnemartin/system-design-primer#application-caching)
* 缓存什么
- * [数据库请求层缓存](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
- * [对象层缓存](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
+ * [数据库请求层缓存](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
+ * [对象层缓存](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* 何时更新缓存
- * [预留缓存](https://github.com/donnemartin/system-design-primer#cache-aside)
- * [完全写入](https://github.com/donnemartin/system-design-primer#write-through)
- * [延迟写 (写回)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
- * [事先更新](https://github.com/donnemartin/system-design-primer#refresh-ahead)
+ * [预留缓存](https://github.com/donnemartin/system-design-primer#cache-aside)
+ * [完全写入](https://github.com/donnemartin/system-design-primer#write-through)
+ * [延迟写 (写回)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
+ * [事先更新](https://github.com/donnemartin/system-design-primer#refresh-ahead)
### 异步性和微服务
-* [消息队列](https://github.com/donnemartin/system-design-primer#message-queues)
-* [任务队列](https://github.com/donnemartin/system-design-primer#task-queues)
-* [回退压力](https://github.com/donnemartin/system-design-primer#back-pressure)
-* [微服务](https://github.com/donnemartin/system-design-primer#microservices)
+* [消息队列](https://github.com/donnemartin/system-design-primer#message-queues)
+* [任务队列](https://github.com/donnemartin/system-design-primer#task-queues)
+* [回退压力](https://github.com/donnemartin/system-design-primer#back-pressure)
+* [微服务](https://github.com/donnemartin/system-design-primer#microservices)
### 沟通
* 关于折中方案的讨论:
- * 客户端的外部通讯 - [遵循 REST 的 HTTP APIs](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
- * 内部通讯 - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
-* [服务探索](https://github.com/donnemartin/system-design-primer#service-discovery)
+ * 客户端的外部通讯 - [遵循 REST 的 HTTP APIs](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
+ * 内部通讯 - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
+* [服务探索](https://github.com/donnemartin/system-design-primer#service-discovery)
### 安全性
-参考 [安全章节](https://github.com/donnemartin/system-design-primer#security)
+参考 [安全章节](https://github.com/donnemartin/system-design-primer#security)
### 延迟数字指标
-查阅 [每个程序员必懂的延迟数字](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know)
+查阅 [每个程序员必懂的延迟数字](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know)
### 正在进行
diff --git a/solutions/system_design/social_graph/README.md b/solutions/system_design/social_graph/README.md
index f7dfd4ef..a18499bf 100644
--- a/solutions/system_design/social_graph/README.md
+++ b/solutions/system_design/social_graph/README.md
@@ -29,7 +29,7 @@ Without an interviewer to address clarifying questions, we'll define some use ca
* 50 friends per user average
* 1 billion friend searches per month
-Exercise the use of more traditional systems - don't use graph-specific solutions such as [GraphQL](http://graphql.org/) or a graph database like [Neo4j](https://neo4j.com/)
+Exercise the use of more traditional systems - don't use graph-specific solutions such as [GraphQL](http://graphql.org/) or a graph database like [Neo4j](https://neo4j.com/)
#### Calculate usage
@@ -50,7 +50,7 @@ Handy conversion guide:
> Outline a high level design with all important components.
-
+
## Step 3: Design core components
@@ -60,40 +60,40 @@ Handy conversion guide:
**Clarify with your interviewer how much code you are expected to write**.
-Without the constraint of millions of users (vertices) and billions of friend relationships (edges), we could solve this unweighted shortest path task with a general BFS approach:
+Without the constraint of millions of users (vertices) and billions of friend relationships (edges), we could solve this unweighted shortest path task with a general BFS approach:
```python
-class Graph(Graph):
+class Graph(Graph):
- def shortest_path(self, source, dest):
+ def shortest_path(self, source, dest):
if source is None or dest is None:
return None
if source is dest:
return [source.key]
- prev_node_keys = self._shortest_path(source, dest)
+ prev_node_keys = self._shortest_path(source, dest)
if prev_node_keys is None:
return None
else:
path_ids = [dest.key]
prev_node_key = prev_node_keys[dest.key]
while prev_node_key is not None:
- path_ids.append(prev_node_key)
+ path_ids.append(prev_node_key)
prev_node_key = prev_node_keys[prev_node_key]
return path_ids[::-1]
- def _shortest_path(self, source, dest):
- queue = deque()
- queue.append(source)
+ def _shortest_path(self, source, dest):
+ queue = deque()
+ queue.append(source)
prev_node_keys = {source.key: None}
source.visit_state = State.visited
while queue:
- node = queue.popleft()
+ node = queue.popleft()
if node is dest:
return prev_node_keys
prev_node = node
- for adj_node in node.adj_nodes.values():
+ for adj_node in node.adj_nodes.values():
if adj_node.visit_state == State.unvisited:
- queue.append(adj_node)
+ queue.append(adj_node)
prev_node_keys[adj_node.key] = prev_node.key
adj_node.visit_state = State.visited
return None
@@ -101,7 +101,7 @@ class Graph(Graph):
We won't be able to fit all users on the same machine, we'll need to [shard](https://github.com/donnemartin/system-design-primer#sharding) users across **Person Servers** and access them with a **Lookup Service**.
-* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
+* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* The **Web Server** forwards the request to the **Search API** server
* The **Search API** server forwards the request to the **User Graph Service**
* The **User Graph Service** does the following:
@@ -109,7 +109,7 @@ We won't be able to fit all users on the same machine, we'll need to [shard](htt
* Finds the appropriate **Person Server** to retrieve the current user's list of `friend_ids`
* Runs a BFS search using the current user as the `source` and the current user's `friend_ids` as the ids for each `adjacent_node`
* To get the `adjacent_node` from a given id:
- * The **User Graph Service** will *again* need to communicate with the **Lookup Service** to determine which **Person Server** stores the`adjacent_node` matching the given id (potential for optimization)
+ * The **User Graph Service** will *again* need to communicate with the **Lookup Service** to determine which **Person Server** stores the `adjacent_node` matching the given id (potential for optimization)
**Clarify with your interviewer how much code you should be writing**.
@@ -118,43 +118,43 @@ We won't be able to fit all users on the same machine, we'll need to [shard](htt
**Lookup Service** implementation:
```python
-class LookupService(object):
+class LookupService(object):
- def __init__(self):
- self.lookup = self._init_lookup() # key: person_id, value: person_server
+ def __init__(self):
+ self.lookup = self._init_lookup() # key: person_id, value: person_server
- def _init_lookup(self):
+ def _init_lookup(self):
...
- def lookup_person_server(self, person_id):
+ def lookup_person_server(self, person_id):
return self.lookup[person_id]
```
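The `_init_lookup` body is elided above. As a sketch (assuming a fixed shard count, which is not specified in the design), the mapping could hash each `person_id` across the Person Servers:

```python
import hashlib

NUM_PERSON_SERVERS = 4  # assumed shard count for this sketch

def lookup_person_server(person_id):
    # Hash the id so people spread evenly across Person Servers. A production
    # system would more likely use consistent hashing to ease resharding.
    digest = hashlib.md5(str(person_id).encode()).hexdigest()
    return int(digest, 16) % NUM_PERSON_SERVERS
```

Any stable hash works here; the tradeoff is that plain modulo hashing remaps most keys when servers are added or removed, which consistent hashing avoids.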
**Person Server** implementation:
```python
-class PersonServer(object):
+class PersonServer(object):
- def __init__(self):
+ def __init__(self):
self.people = {} # key: person_id, value: person
- def add_person(self, person):
+ def add_person(self, person):
...
- def people(self, ids):
+ def get_people(self, ids):  # renamed from people() to avoid shadowing the self.people dict
results = []
for id in ids:
if id in self.people:
- results.append(self.people[id])
+ results.append(self.people[id])
return results
```
**Person** implementation:
```python
-class Person(object):
+class Person(object):
- def __init__(self, id, name, friend_ids):
+ def __init__(self, id, name, friend_ids):
self.id = id
self.name = name
self.friend_ids = friend_ids
@@ -163,21 +163,21 @@ class Person(object):
**User Graph Service** implementation:
```python
-class UserGraphService(object):
+class UserGraphService(object):
- def __init__(self, lookup_service):
+ def __init__(self, lookup_service):
self.lookup_service = lookup_service
- def person(self, person_id):
- person_server = self.lookup_service.lookup_person_server(person_id)
- return person_server.people([person_id])
+ def person(self, person_id):
+ person_server = self.lookup_service.lookup_person_server(person_id)
+ return person_server.get_people([person_id])
- def shortest_path(self, source_key, dest_key):
+ def shortest_path(self, source_key, dest_key):
if source_key is None or dest_key is None:
return None
if source_key is dest_key:
return [source_key]
- prev_node_keys = self._shortest_path(source_key, dest_key)
+ prev_node_keys = self._shortest_path(source_key, dest_key)
if prev_node_keys is None:
return None
else:
@@ -185,40 +185,40 @@ class UserGraphService(object):
path_ids = [dest_key]
prev_node_key = prev_node_keys[dest_key]
while prev_node_key is not None:
- path_ids.append(prev_node_key)
+ path_ids.append(prev_node_key)
prev_node_key = prev_node_keys[prev_node_key]
# Reverse the list since we iterated backwards
return path_ids[::-1]
- def _shortest_path(self, source_key, dest_key, path):
+ def _shortest_path(self, source_key, dest_key):  # unused 'path' param dropped to match the two-argument call above
# Use the id to get the Person
- source = self.person(source_key)
+ source = self.person(source_key)
# Update our bfs queue
- queue = deque()
- queue.append(source)
+ queue = deque()
+ queue.append(source)
# prev_node_keys keeps track of each hop from
# the source_key to the dest_key
prev_node_keys = {source_key: None}
# We'll use visited_ids to keep track of which nodes we've
# visited, which can be different from a typical bfs where
# this can be stored in the node itself
- visited_ids = set()
- visited_ids.add(source.id)
+ visited_ids = set()
+ visited_ids.add(source.id)
while queue:
- node = queue.popleft()
+ node = queue.popleft()
if node.key is dest_key:
return prev_node_keys
prev_node = node
for friend_id in node.friend_ids:
if friend_id not in visited_ids:
- friend_node = self.person(friend_id)
- queue.append(friend_node)
+ friend_node = self.person(friend_id)
+ queue.append(friend_node)
prev_node_keys[friend_id] = prev_node.key
- visited_ids.add(friend_id)
+ visited_ids.add(friend_id)
return None
```
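Wiring the pieces together, a minimal end-to-end sketch (with plain in-memory objects standing in for the sharded Person Servers and a modulo-based lookup, both assumptions) runs a BFS that asks the Lookup Service for each hop:

```python
from collections import deque

class Person:
    def __init__(self, id, name, friend_ids):
        self.id = id
        self.name = name
        self.friend_ids = friend_ids

class PersonServer:
    """In-memory stand-in for one shard of the person store."""
    def __init__(self):
        self.people = {}  # key: person_id, value: Person

class LookupService:
    """Maps a person_id to the Person Server holding that person."""
    def __init__(self, shards):
        self.shards = shards

    def lookup_person_server(self, person_id):
        return self.shards[person_id % len(self.shards)]  # simple modulo sharding

def shortest_path(lookup, source_id, dest_id):
    """BFS across shards, asking the Lookup Service on every hop."""
    person = lambda pid: lookup.lookup_person_server(pid).people[pid]
    prev = {source_id: None}  # tracks each hop back toward the source
    queue = deque([source_id])
    while queue:
        pid = queue.popleft()
        if pid == dest_id:
            path = []
            while pid is not None:  # walk prev pointers back to the source
                path.append(pid)
                pid = prev[pid]
            return path[::-1]
        for friend_id in person(pid).friend_ids:
            if friend_id not in prev:
                prev[friend_id] = pid
                queue.append(friend_id)
    return None  # no path between source and dest

# Two shards holding a 1 - 2 - 3 friendship chain
shards = [PersonServer(), PersonServer()]
lookup = LookupService(shards)
for pid, friends in {1: [2], 2: [1, 3], 3: [2]}.items():
    lookup.lookup_person_server(pid).people[pid] = Person(pid, 'user%d' % pid, friends)

print(shortest_path(lookup, 1, 3))  # → [1, 2, 3]
```

Each `person(pid)` call crosses the network in the real design, which is why batching ids per Person Server is flagged above as a potential optimization.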
-We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
+We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
```
$ curl https://social.com/api/v1/friend_search?person_id=1234
@@ -244,13 +244,13 @@ Response:
},
```
-For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc).
+For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc).
## Step 4: Scale the design
> Identify and address bottlenecks, given the constraints.
-
+
**Important: Do not simply jump right into the final design from the initial design!**
@@ -262,16 +262,16 @@ We'll introduce some components to complete the design and to address scalabilit
*To avoid repeating discussions*, refer to the following [system design topics](https://github.com/donnemartin/system-design-primer#index-of-system-design-topics) for main talking points, tradeoffs, and alternatives:
-* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
-* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
-* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
-* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
-* [API server (application layer)](https://github.com/donnemartin/system-design-primer#application-layer)
-* [Cache](https://github.com/donnemartin/system-design-primer#cache)
-* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
-* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
+* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
+* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
+* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
+* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
+* [API server (application layer)](https://github.com/donnemartin/system-design-primer#application-layer)
+* [Cache](https://github.com/donnemartin/system-design-primer#cache)
+* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
+* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
-To address the constraint of 400 *average* read requests per second (higher at peak), person data can be served from a **Memory Cache** such as Redis or Memcached to reduce response times and to reduce traffic to downstream services. This could be especially useful for people who do multiple searches in succession and for people who are well-connected. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.1
+To address the constraint of 400 *average* read requests per second (higher at peak), person data can be served from a **Memory Cache** such as Redis or Memcached to reduce response times and to reduce traffic to downstream services. This could be especially useful for people who do multiple searches in succession and for people who are well-connected. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.1
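The read path through the **Memory Cache** could follow the cache-aside pattern; below is a sketch with a plain dict standing in for Redis or Memcached (an assumption, as is the `person:` key prefix):

```python
class MemoryCache:
    """Dict-backed stand-in for Redis/Memcached (assumption for this sketch)."""
    def __init__(self):
        self.store = {}

    def get(self, key):
        return self.store.get(key)

    def set(self, key, value):
        self.store[key] = value

def get_person_cached(cache, person_server_fetch, person_id):
    # Cache-aside: try the cache first, fall back to the Person Server,
    # then populate the cache for subsequent reads.
    key = 'person:%d' % person_id
    person = cache.get(key)
    if person is None:
        person = person_server_fetch(person_id)
        cache.set(key, person)
    return person

cache = MemoryCache()
calls = []  # records every trip to the (simulated) Person Server
fetch = lambda pid: (calls.append(pid), {'id': pid, 'name': 'user%d' % pid})[1]
get_person_cached(cache, fetch, 1234)   # miss: hits the Person Server
get_person_cached(cache, fetch, 1234)   # hit: served from memory
print(len(calls))  # → 1
```

The second lookup never reaches the Person Server, which is the point: repeat searches and well-connected people stay memory-resident.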
Below are further optimizations:
@@ -282,7 +282,7 @@ Below are further optimizations:
* Do two BFS searches at the same time, one starting from the source, and one from the destination, then merge the two paths
* Start the BFS search from people with large numbers of friends, as they are more likely to reduce the number of [degrees of separation](https://en.wikipedia.org/wiki/Six_degrees_of_separation) between the current user and the search target
* Set a limit based on time or number of hops before asking the user if they want to continue searching, as searching could take a considerable amount of time in some cases
-* Use a **Graph Database** such as [Neo4j](https://neo4j.com/) or a graph-specific query language such as [GraphQL](http://graphql.org/) (if there were no constraint preventing the use of **Graph Databases**)
+* Use a **Graph Database** such as [Neo4j](https://neo4j.com/) or a graph-specific query language such as [GraphQL](http://graphql.org/) (if there were no constraint preventing the use of **Graph Databases**)
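The first optimization above (searching from both ends at once) can be sketched as a bidirectional BFS that alternates expanding the smaller frontier; `adj` is a local adjacency dict standing in for calls to the Person Servers:

```python
def bidirectional_bfs_hops(adj, source, dest):
    """Return the number of hops between source and dest, or None if no path.

    Expands one frontier at a time, always the smaller one, and stops as soon
    as the two frontiers meet.  adj maps person_id -> friend_ids (assumption:
    a local adjacency dict rather than remote Person Server calls).
    """
    if source == dest:
        return 0
    frontier_a, frontier_b = {source}, {dest}
    seen_a, seen_b = {source}, {dest}
    hops = 0
    while frontier_a and frontier_b:
        if len(frontier_a) > len(frontier_b):  # expand the smaller side
            frontier_a, frontier_b = frontier_b, frontier_a
            seen_a, seen_b = seen_b, seen_a
        hops += 1
        next_frontier = set()
        for node in frontier_a:
            for friend in adj.get(node, ()):
                if friend in seen_b:   # frontiers met: path found
                    return hops
                if friend not in seen_a:
                    seen_a.add(friend)
                    next_frontier.add(friend)
        frontier_a = next_frontier
    return None  # one side exhausted: no path

graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(bidirectional_bfs_hops(graph, 1, 4))  # → 3
```

Meeting in the middle roughly halves the search depth on each side, which matters because BFS cost grows exponentially with depth.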
## Additional talking points
@@ -290,58 +290,58 @@ Below are further optimizations:
### SQL scaling patterns
-* [Read replicas](https://github.com/donnemartin/system-design-primer#master-slave-replication)
-* [Federation](https://github.com/donnemartin/system-design-primer#federation)
-* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
-* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
-* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
+* [Read replicas](https://github.com/donnemartin/system-design-primer#master-slave-replication)
+* [Federation](https://github.com/donnemartin/system-design-primer#federation)
+* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
+* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
+* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
#### NoSQL
-* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
-* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
-* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
-* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
-* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
+* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
+* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
+* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
+* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
+* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### Caching
* Where to cache
- * [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
- * [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
- * [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
- * [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
- * [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
+ * [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
+ * [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
+ * [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
+ * [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
+ * [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* What to cache
- * [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
- * [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
+ * [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
+ * [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* When to update the cache
- * [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
- * [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
- * [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
- * [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
+ * [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
+ * [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
+ * [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
+ * [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
### Asynchronism and microservices
-* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
-* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
-* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
-* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
+* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
+* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
+* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
+* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
### Communications
* Discuss tradeoffs:
- * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
- * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
-* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
+ * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
+ * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
+* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
### Security
-Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
+Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
### Latency numbers
-See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
+See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
### Ongoing
diff --git a/solutions/system_design/social_graph/social_graph_snippets.py b/solutions/system_design/social_graph/social_graph_snippets.py
index f0ea4c4b..495eb781 100644
--- a/solutions/system_design/social_graph/social_graph_snippets.py
+++ b/solutions/system_design/social_graph/social_graph_snippets.py
@@ -3,70 +3,70 @@ from collections import deque
from enum import Enum
-class State(Enum):
+class State(Enum):
unvisited = 0
visited = 1
-class Graph(object):
+class Graph(object):
- def bfs(self, source, dest):
+ def bfs(self, source, dest):
if source is None:
return False
- queue = deque()
- queue.append(source)
+ queue = deque()
+ queue.append(source)
source.visit_state = State.visited
while queue:
- node = queue.popleft()
- print(node)
+ node = queue.popleft()
+ print(node)
if dest is node:
return True
- for adjacent_node in node.adj_nodes.values():
+ for adjacent_node in node.adj_nodes.values():
if adjacent_node.visit_state == State.unvisited:
- queue.append(adjacent_node)
+ queue.append(adjacent_node)
adjacent_node.visit_state = State.visited
return False
-class Person(object):
+class Person(object):
- def __init__(self, id, name):
+ def __init__(self, id, name):
self.id = id
self.name = name
self.friend_ids = []
-class LookupService(object):
+class LookupService(object):
- def __init__(self):
+ def __init__(self):
self.lookup = {} # key: person_id, value: person_server
- def get_person(self, person_id):
+ def get_person(self, person_id):
person_server = self.lookup[person_id]
return person_server.people[person_id]
-class PersonServer(object):
+class PersonServer(object):
- def __init__(self):
+ def __init__(self):
self.people = {} # key: person_id, value: person
- def get_people(self, ids):
+ def get_people(self, ids):
results = []
for id in ids:
if id in self.people:
- results.append(self.people[id])
+ results.append(self.people[id])
return results
-class UserGraphService(object):
+class UserGraphService(object):
- def __init__(self, person_ids, lookup):
+ def __init__(self, person_ids, lookup):
self.lookup = lookup
self.person_ids = person_ids
- self.visited_ids = set()
+ self.visited_ids = set()
- def bfs(self, source, dest):
+ def bfs(self, source, dest):
# Use self.visited_ids to track visited nodes
# Use self.lookup to translate a person_id to a Person
pass
diff --git a/solutions/system_design/twitter/README-zh-Hans.md b/solutions/system_design/twitter/README-zh-Hans.md
index 1853444d..92cd48e7 100644
--- a/solutions/system_design/twitter/README-zh-Hans.md
+++ b/solutions/system_design/twitter/README-zh-Hans.md
@@ -1,6 +1,6 @@
# 设计推特时间轴与搜索功能
-**注意:这个文档中的链接会直接指向[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)中的有关部分,以避免重复的内容。你可以参考链接的相关内容,来了解其总的要点、方案的权衡取舍以及可选的替代方案。**
+**注意:这个文档中的链接会直接指向[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)中的有关部分,以避免重复的内容。你可以参考链接的相关内容,来了解其总的要点、方案的权衡取舍以及可选的替代方案。**
**设计 Facebook 的 feed** 与**设计 Facebook 搜索**与此为同一类型问题。
@@ -74,11 +74,11 @@
* 每条推特 10 KB * 每天 5 亿条推特 * 每月 30 天
* 3 年产生新推特的内容为 5.4 PB
* 每秒需要处理 10 万次读取请求
- * 每个月需要处理 2500 亿次请求 * (每秒 400 次请求 / 每月 10 亿次请求)
+ * 每个月需要处理 2500 亿次请求 * (每秒 400 次请求 / 每月 10 亿次请求)
* 每秒发布 6000 条推特
- * 每月发布 150 亿条推特 * (每秒 400 次请求 / 每月 10 次请求)
+ * 每月发布 150 亿条推特 * (每秒 400 次请求 / 每月 10 亿次请求)
* 每秒推送 6 万条推特
- * 每月推送 1500 亿条推特 * (每秒 400 次请求 / 每月 10 亿次请求)
+ * 每月推送 1500 亿条推特 * (每秒 400 次请求 / 每月 10 亿次请求)
* 每秒 4000 次搜索请求
便利换算指南:
@@ -92,7 +92,7 @@
> 列出所有重要组件以规划概要设计。
-
+
## 第三步:设计核心组件
@@ -100,13 +100,13 @@
### 用例:用户发表了一篇推特
-我们可以将用户自己发表的推特存储在[关系数据库](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)中。我们也可以讨论一下[究竟是用 SQL 还是用 NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)。
+我们可以将用户自己发表的推特存储在[关系数据库](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)中。我们也可以讨论一下[究竟是用 SQL 还是用 NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)。
-构建用户主页时间轴(查看关注用户的活动)以及推送推特是件麻烦事。将特推传播给所有关注者(每秒约递送 6 万条推特)这一操作有可能会使传统的[关系数据库](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)超负载。因此,我们可以使用 **NoSQL 数据库**或**内存数据库**之类的更快的数据存储方式。从内存读取 1 MB 连续数据大约要花 250 微秒,而从 SSD 读取同样大小的数据要花费 4 倍的时间,从机械硬盘读取需要花费 80 倍以上的时间。1
+构建用户主页时间轴(查看关注用户的活动)以及推送推特是件麻烦事。将推特传播给所有关注者(每秒约递送 6 万条推特)这一操作有可能会使传统的[关系数据库](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)超负载。因此,我们可以使用 **NoSQL 数据库**或**内存数据库**之类的更快的数据存储方式。从内存读取 1 MB 连续数据大约要花 250 微秒,而从 SSD 读取同样大小的数据要花费 4 倍的时间,从机械硬盘读取需要花费 80 倍以上的时间。1
我们可以将照片、视频之类的媒体存储于**对象存储**中。
-* **客户端**向应用[反向代理](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)的**Web 服务器**发送一条推特
+* **客户端**向应用[反向代理](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)的**Web 服务器**发送一条推特
* **Web 服务器**将请求转发给**写 API**服务器
* **写 API**服务器将推特使用 **SQL 数据库**存储于用户时间轴中
* **写 API**调用**消息输出服务**,进行以下操作:
@@ -130,7 +130,7 @@
新发布的推特将被存储在对应用户(关注且活跃的用户)的主页时间轴的**内存缓存**中。
-我们可以调用一个公共的 [REST API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest):
+我们可以调用一个公共的 [REST API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest):
```
$ curl -X POST --data '{ "user_id": "123", "auth_token": "ABC123", \
@@ -150,16 +150,16 @@ $ curl -X POST --data '{ "user_id": "123", "auth_token": "ABC123", \
}
```
-而对于服务器内部的通信,我们可以使用 [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)。
+而对于服务器内部的通信,我们可以使用 [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)。
### 用例:用户浏览主页时间轴
* **客户端**向 **Web 服务器**发起一次读取主页时间轴的请求
* **Web 服务器**将请求转发给**读取 API**服务器
* **读取 API**服务器调用**时间轴服务**进行以下操作:
- * 从**内存缓存**读取时间轴数据,其中包括推特 id 与用户 id - O(1)
- * 通过 [multiget](http://redis.io/commands/mget) 向**推特信息服务**进行查询,以获取相关 id 推特的额外信息 - O(n)
- * 通过 muiltiget 向**用户信息服务**进行查询,以获取相关 id 用户的额外信息 - O(n)
+ * 从**内存缓存**读取时间轴数据,其中包括推特 id 与用户 id - O(1)
+ * 通过 [multiget](http://redis.io/commands/mget) 向**推特信息服务**进行查询,以获取相关 id 推特的额外信息 - O(n)
+ * 通过 multiget 向**用户信息服务**进行查询,以获取相关 id 用户的额外信息 - O(n)
REST API:
@@ -206,8 +206,8 @@ REST API 与前面的主页时间轴类似,区别只在于取出的推特是
* 修正拼写错误
* 规范字母大小写
* 将查询转换为布尔操作
- * 查询**搜索集群**(例如[Lucene](https://lucene.apache.org/))检索结果:
- * 对集群内的所有服务器进行查询,将有结果的查询进行[发散聚合(Scatter gathers)](https://github.com/donnemartin/system-design-primer#under-development)
+ * 查询**搜索集群**(例如[Lucene](https://lucene.apache.org/))检索结果:
+ * 对集群内的所有服务器进行查询,将有结果的查询进行[发散聚合(Scatter gathers)](https://github.com/donnemartin/system-design-primer#under-development)
* 合并取到的条目,进行评分与排序,最终返回结果
REST API:
@@ -222,7 +222,7 @@ $ curl https://twitter.com/api/v1/search?query=hello+world
> 根据限制条件,找到并解决瓶颈。
-
+
**重要提示:不要从最初设计直接跳到最终设计中!**
@@ -232,19 +232,19 @@ $ curl https://twitter.com/api/v1/search?query=hello+world
我们将会介绍一些组件来完成设计,并解决架构扩张问题。内置的负载均衡器将不做讨论以节省篇幅。
-**为了避免重复讨论**,请参考[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)相关部分来了解其要点、方案的权衡取舍以及可选的替代方案。
+**为了避免重复讨论**,请参考[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)相关部分来了解其要点、方案的权衡取舍以及可选的替代方案。
-* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
-* [负载均衡器](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
-* [水平拓展](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
-* [反向代理(web 服务器)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
-* [API 服务(应用层)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
-* [缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
-* [关系型数据库管理系统 (RDBMS)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#关系型数据库管理系统rdbms)
-* [SQL 故障主从切换](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#故障切换)
-* [主从复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
-* [一致性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
-* [可用性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
+* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
+* [负载均衡器](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
+* [水平拓展](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
+* [反向代理(web 服务器)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
+* [API 服务(应用层)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
+* [缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
+* [关系型数据库管理系统 (RDBMS)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#关系型数据库管理系统rdbms)
+* [SQL 故障主从切换](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#故障切换)
+* [主从复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
+* [一致性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
+* [可用性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
**消息输出服务**有可能成为性能瓶颈。那些有着百万数量关注着的用户可能发一条推特就需要好几分钟才能完成消息输出进程。这有可能使 @回复 这种推特时出现竞争条件,因此需要根据服务时间对此推特进行重排序来降低影响。
@@ -267,10 +267,10 @@ $ curl https://twitter.com/api/v1/search?query=hello+world
高容量的写入将淹没单个的 **SQL 写主从**模式,因此需要更多的拓展技术。
-* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
-* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
-* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
-* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-调优)
+* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
+* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
+* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
+* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-调优)
我们也可以考虑将一些数据移至 **NoSQL 数据库**。
@@ -280,50 +280,50 @@ $ curl https://twitter.com/api/v1/search?query=hello+world
#### NoSQL
-* [键-值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
-* [文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
-* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
-* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
-* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
+* [键-值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
+* [文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
+* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
+* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
+* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
### 缓存
* 在哪缓存
- * [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
- * [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
- * [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
- * [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
- * [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
+ * [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
+ * [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
+ * [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
+ * [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
+ * [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
* 什么需要缓存
- * [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
- * [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
+ * [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
+ * [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
* 何时更新缓存
- * [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
- * [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
- * [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
- * [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
+ * [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
+ * [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
+ * [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
+ * [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
### 异步与微服务
-* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
-* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
-* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
-* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
+* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
+* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
+* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
+* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
### 通信
* 可权衡选择的方案:
- * 与客户端的外部通信 - [使用 REST 作为 HTTP API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
- * 服务器内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
-* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
+ * 与客户端的外部通信 - [使用 REST 作为 HTTP API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
+ * 服务器内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
+* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
### 安全性
-请参阅[「安全」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全)一章。
+请参阅[「安全」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全)一章。
### 延迟数值
-请参阅[「每个程序员都应该知道的延迟数」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数)。
+请参阅[「每个程序员都应该知道的延迟数」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数)。
### 持续探讨
diff --git a/solutions/system_design/twitter/README.md b/solutions/system_design/twitter/README.md
index d14996f1..594977d4 100644
--- a/solutions/system_design/twitter/README.md
+++ b/solutions/system_design/twitter/README.md
@@ -18,8 +18,8 @@ Without an interviewer to address clarifying questions, we'll define some use ca
* **User** posts a tweet
* **Service** pushes tweets to followers, sending push notifications and emails
-* **User** views the user timeline (activity from the user)
-* **User** views the home timeline (activity from people the user is following)
+* **User** views the user timeline (activity from the user)
+* **User** views the home timeline (activity from people the user is following)
* **User** searches keywords
* **Service** has high availability
@@ -74,13 +74,13 @@ Search
* 10 KB per tweet * 500 million tweets per day * 30 days per month
* 5.4 PB of new tweet content in 3 years
* 100 thousand read requests per second
- * 250 billion read requests per month * (400 requests per second / 1 billion requests per month)
+ * 250 billion read requests per month * (400 requests per second / 1 billion requests per month)
* 6,000 tweets per second
- * 15 billion tweets per month * (400 requests per second / 1 billion requests per month)
+ * 15 billion tweets per month * (400 requests per second / 1 billion requests per month)
* 60 thousand tweets delivered on fanout per second
- * 150 billion tweets delivered on fanout per month * (400 requests per second / 1 billion requests per month)
+ * 150 billion tweets delivered on fanout per month * (400 requests per second / 1 billion requests per month)
* 4,000 search requests per second
- * 10 billion searches per month * (400 requests per second / 1 billion requests per month)
+ * 10 billion searches per month * (400 requests per second / 1 billion requests per month)
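These figures all follow from the rule of thumb that about 400 requests per second equals 1 billion requests per month; a quick sanity check:

```python
# Rule of thumb: ~400 requests per second ≈ 1 billion requests per month
seconds_per_month = 60 * 60 * 24 * 30
requests_per_second = 1_000_000_000 / seconds_per_month
print(round(requests_per_second))  # ≈ 386, rounded up to 400 for easy math

# Re-derive each per-second figure from the monthly estimates above
rate = 400  # req/s per 1 billion monthly requests
print(250 * rate)  # 100,000 read requests per second
print(15 * rate)   # 6,000 tweets per second
print(150 * rate)  # 60,000 fanout deliveries per second
print(10 * rate)   # 4,000 search requests per second
```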
Handy conversion guide:
@@ -93,7 +93,7 @@ Handy conversion guide:
> Outline a high level design with all important components.
-
+
## Step 3: Design core components
@@ -101,13 +101,13 @@ Handy conversion guide:
### Use case: User posts a tweet
-We could store the user's own tweets to populate the user timeline (activity from the user) in a [relational database](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms). We should discuss the [use cases and tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql).
+We could store the user's own tweets to populate the user timeline (activity from the user) in a [relational database](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms). We should discuss the [use cases and tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql).
-Delivering tweets and building the home timeline (activity from people the user is following) is trickier. Fanning out tweets to all followers (60 thousand tweets delivered on fanout per second) will overload a traditional [relational database](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms). We'll probably want to choose a data store with fast writes such as a **NoSQL database** or **Memory Cache**. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.1
+Delivering tweets and building the home timeline (activity from people the user is following) is trickier. Fanning out tweets to all followers (60 thousand tweets delivered on fanout per second) will overload a traditional [relational database](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms). We'll probably want to choose a data store with fast writes such as a **NoSQL database** or **Memory Cache**. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.1
We could store media such as photos or videos on an **Object Store**.
-* The **Client** posts a tweet to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
+* The **Client** posts a tweet to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* The **Web Server** forwards the request to the **Write API** server
* The **Write API** stores the tweet in the user's timeline on a **SQL database**
* The **Write API** contacts the **Fan Out Service**, which does the following:
@@ -129,9 +129,9 @@ If our **Memory Cache** is Redis, we could use a native Redis list with the foll
| tweet_id user_id meta | tweet_id user_id meta | tweet_id user_id meta |
```
-The new tweet would be placed in the **Memory Cache**, which populates the user's home timeline (activity from people the user is following).
+The new tweet would be placed in the **Memory Cache**, which populates the user's home timeline (activity from people the user is following).
-We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
+We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
```
$ curl -X POST --data '{ "user_id": "123", "auth_token": "ABC123", \
@@ -151,16 +151,16 @@ Response:
}
```
-For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc).
+For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc).
### Use case: User views the home timeline
* The **Client** posts a home timeline request to the **Web Server**
* The **Web Server** forwards the request to the **Read API** server
* The **Read API** server contacts the **Timeline Service**, which does the following:
- * Gets the timeline data stored in the **Memory Cache**, containing tweet ids and user ids - O(1)
- * Queries the **Tweet Info Service** with a [multiget](http://redis.io/commands/mget) to obtain additional info about the tweet ids - O(n)
- * Queries the **User Info Service** with a multiget to obtain additional info about the user ids - O(n)
+ * Gets the timeline data stored in the **Memory Cache**, containing tweet ids and user ids - O(1)
+ * Queries the **Tweet Info Service** with a [multiget](http://redis.io/commands/mget) to obtain additional info about the tweet ids - O(n)
+ * Queries the **User Info Service** with a multiget to obtain additional info about the user ids - O(n)
REST API:
@@ -223,7 +223,7 @@ The response would be similar to that of the home timeline, except for tweets ma
> Identify and address bottlenecks, given the constraints.
-
+
**Important: Do not simply jump right into the final design from the initial design!**
@@ -235,18 +235,18 @@ We'll introduce some components to complete the design and to address scalabilit
*To avoid repeating discussions*, refer to the following [system design topics](https://github.com/donnemartin/system-design-primer#index-of-system-design-topics) for main talking points, tradeoffs, and alternatives:
-* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
-* [CDN](https://github.com/donnemartin/system-design-primer#content-delivery-network)
-* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
-* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
-* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
-* [API server (application layer)](https://github.com/donnemartin/system-design-primer#application-layer)
-* [Cache](https://github.com/donnemartin/system-design-primer#cache)
-* [Relational database management system (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
-* [SQL write master-slave failover](https://github.com/donnemartin/system-design-primer#fail-over)
-* [Master-slave replication](https://github.com/donnemartin/system-design-primer#master-slave-replication)
-* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
-* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
+* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
+* [CDN](https://github.com/donnemartin/system-design-primer#content-delivery-network)
+* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
+* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
+* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
+* [API server (application layer)](https://github.com/donnemartin/system-design-primer#application-layer)
+* [Cache](https://github.com/donnemartin/system-design-primer#cache)
+* [Relational database management system (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
+* [SQL write master-slave failover](https://github.com/donnemartin/system-design-primer#fail-over)
+* [Master-slave replication](https://github.com/donnemartin/system-design-primer#master-slave-replication)
+* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
+* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
The **Fanout Service** is a potential bottleneck. Twitter users with millions of followers could take several minutes to have their tweets go through the fanout process. This could lead to race conditions with @replies to the tweet, which we could mitigate by re-ordering the tweets at serve time.
@@ -269,10 +269,10 @@ Although the **Memory Cache** should reduce the load on the database, it is unli
The high volume of writes would overwhelm a single **SQL Write Master-Slave**, also pointing to a need for additional scaling techniques.
-* [Federation](https://github.com/donnemartin/system-design-primer#federation)
-* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
-* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
-* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
+* [Federation](https://github.com/donnemartin/system-design-primer#federation)
+* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
+* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
+* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
We should also consider moving some data to a **NoSQL Database**.
@@ -282,50 +282,50 @@ We should also consider moving some data to a **NoSQL Database**.
#### NoSQL
-* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
-* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
-* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
-* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
-* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
+* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
+* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
+* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
+* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
+* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### Caching
* Where to cache
- * [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
- * [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
- * [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
- * [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
- * [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
+ * [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
+ * [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
+ * [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
+ * [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
+ * [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* What to cache
- * [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
- * [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
+ * [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
+ * [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* When to update the cache
- * [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
- * [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
- * [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
- * [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
+ * [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
+ * [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
+ * [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
+ * [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
### Asynchronism and microservices
-* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
-* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
-* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
-* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
+* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
+* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
+* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
+* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
### Communications
* Discuss tradeoffs:
- * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
- * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
-* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
+ * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
+ * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
+* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
### Security
-Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
+Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
### Latency numbers
-See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
+See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
### Ongoing
diff --git a/solutions/system_design/web_crawler/README-zh-Hans.md b/solutions/system_design/web_crawler/README-zh-Hans.md
index 2ad0938e..e552e3b2 100644
--- a/solutions/system_design/web_crawler/README-zh-Hans.md
+++ b/solutions/system_design/web_crawler/README-zh-Hans.md
@@ -1,6 +1,6 @@
# 设计一个网页爬虫
-**注意:这个文档中的链接会直接指向[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)中的有关部分,以避免重复的内容。你可以参考链接的相关内容,来了解其总的要点、方案的权衡取舍以及可选的替代方案。**
+**注意:这个文档中的链接会直接指向[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)中的有关部分,以避免重复的内容。你可以参考链接的相关内容,来了解其总的要点、方案的权衡取舍以及可选的替代方案。**
## 第一步:简述用例与约束条件
@@ -67,7 +67,7 @@
> 列出所有重要组件以规划概要设计。
-
+
## 第三步:设计核心组件
@@ -75,11 +75,11 @@
### 用例:爬虫服务抓取一系列网页
-假设我们有一个初始列表 `links_to_crawl`(待抓取链接),它最初基于网站整体的知名度来排序。当然如果这个假设不合理,我们可以使用 [Yahoo](https://www.yahoo.com/)、[DMOZ](http://www.dmoz.org/) 等知名门户网站作为种子链接来进行扩散 。
+假设我们有一个初始列表 `links_to_crawl`(待抓取链接),它最初基于网站整体的知名度来排序。当然如果这个假设不合理,我们可以使用 [Yahoo](https://www.yahoo.com/)、[DMOZ](http://www.dmoz.org/) 等知名门户网站作为种子链接来进行扩散。
我们将用表 `crawled_links` (已抓取链接 )来记录已经处理过的链接以及相应的页面签名。
-我们可以将 `links_to_crawl` 和 `crawled_links` 记录在键-值型 **NoSQL 数据库**中。对于 `crawled_links` 中已排序的链接,我们可以使用 [Redis](https://redis.io/) 的有序集合来维护网页链接的排名。我们应当在 [选择 SQL 还是 NoSQL 的问题上,讨论有关使用场景以及利弊 ](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)。
+我们可以将 `links_to_crawl` 和 `crawled_links` 记录在键-值型 **NoSQL 数据库**中。对于 `crawled_links` 中已排序的链接,我们可以使用 [Redis](https://redis.io/) 的有序集合来维护网页链接的排名。我们应当在 [选择 SQL 还是 NoSQL 的问题上,讨论有关使用场景以及利弊 ](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)。
* **爬虫服务**按照以下流程循环处理每一个页面链接:
* 选取排名最靠前的待抓取链接
@@ -88,7 +88,7 @@
* 这样做可以避免陷入死循环
* 继续(进入下一次循环)
* 若不存在,则抓取该链接
- * 在**倒排索引服务**任务队列中,新增一个生成[倒排索引](https://en.wikipedia.org/wiki/Search_engine_indexing)任务。
+ * 在**倒排索引服务**任务队列中,新增一个生成[倒排索引](https://en.wikipedia.org/wiki/Search_engine_indexing)任务。
* 在**文档服务**任务队列中,新增一个生成静态标题和摘要的任务。
* 生成页面签名
* 在 **NoSQL 数据库**的 `links_to_crawl` 中删除该链接
@@ -99,33 +99,33 @@
`PagesDataStore` 是**爬虫服务**中的一个抽象类,它使用 **NoSQL 数据库**进行存储。
```python
-class PagesDataStore(object):
+class PagesDataStore(object):
- def __init__(self, db);
+ def __init__(self, db):
self.db = db
...
- def add_link_to_crawl(self, url):
+ def add_link_to_crawl(self, url):
"""将指定链接加入 `links_to_crawl`。"""
...
- def remove_link_to_crawl(self, url):
+ def remove_link_to_crawl(self, url):
"""从 `links_to_crawl` 中删除指定链接。"""
...
- def reduce_priority_link_to_crawl(self, url)
+ def reduce_priority_link_to_crawl(self, url):
"""在 `links_to_crawl` 中降低一个链接的优先级以避免死循环。"""
...
- def extract_max_priority_page(self):
+ def extract_max_priority_page(self):
"""返回 `links_to_crawl` 中优先级最高的链接。"""
...
- def insert_crawled_link(self, url, signature):
+ def insert_crawled_link(self, url, signature):
"""将指定链接加入 `crawled_links`。"""
...
- def crawled_similar(self, signature):
+ def crawled_similar(self, signature):
"""判断待抓取页面的签名是否与某个已抓取页面的签名相似。"""
...
```
@@ -133,9 +133,9 @@ class PagesDataStore(object):
`Page` 是**爬虫服务**的一个抽象类,它封装了网页对象,由页面链接、页面内容、子链接和页面签名构成。
```python
-class Page(object):
+class Page(object):
- def __init__(self, url, contents, child_urls, signature):
+ def __init__(self, url, contents, child_urls, signature):
self.url = url
self.contents = contents
self.child_urls = child_urls
@@ -145,33 +145,33 @@ class Page(object):
`Crawler` 是**爬虫服务**的主类,由`Page` 和 `PagesDataStore` 组成。
```python
-class Crawler(object):
+class Crawler(object):
- def __init__(self, data_store, reverse_index_queue, doc_index_queue):
+ def __init__(self, data_store, reverse_index_queue, doc_index_queue):
self.data_store = data_store
self.reverse_index_queue = reverse_index_queue
self.doc_index_queue = doc_index_queue
- def create_signature(self, page):
+ def create_signature(self, page):
"""基于页面链接与内容生成签名。"""
...
- def crawl_page(self, page):
+ def crawl_page(self, page):
for url in page.child_urls:
- self.data_store.add_link_to_crawl(url)
- page.signature = self.create_signature(page)
- self.data_store.remove_link_to_crawl(page.url)
- self.data_store.insert_crawled_link(page.url, page.signature)
+ self.data_store.add_link_to_crawl(url)
+ page.signature = self.create_signature(page)
+ self.data_store.remove_link_to_crawl(page.url)
+ self.data_store.insert_crawled_link(page.url, page.signature)
- def crawl(self):
+ def crawl(self):
while True:
- page = self.data_store.extract_max_priority_page()
+ page = self.data_store.extract_max_priority_page()
if page is None:
break
- if self.data_store.crawled_similar(page.signature):
- self.data_store.reduce_priority_link_to_crawl(page.url)
+ if self.data_store.crawled_similar(page.signature):
+ self.data_store.reduce_priority_link_to_crawl(page.url)
else:
- self.crawl_page(page)
+ self.crawl_page(page)
```
### 处理重复内容
@@ -186,18 +186,18 @@ class Crawler(object):
* 假设有 10 亿条数据,我们应该使用 **MapReduce** 来输出只出现 1 次的记录。
```python
-class RemoveDuplicateUrls(MRJob):
+class RemoveDuplicateUrls(MRJob):
- def mapper(self, _, line):
+ def mapper(self, _, line):
yield line, 1
- def reducer(self, key, values):
- total = sum(values)
+ def reducer(self, key, values):
+ total = sum(values)
if total == 1:
yield key, total
```
-比起处理重复内容,检测重复内容更为复杂。我们可以基于网页内容生成签名,然后对比两者签名的相似度。可能会用到的算法有 [Jaccard index](https://en.wikipedia.org/wiki/Jaccard_index) 以及 [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity)。
+比起处理重复内容,检测重复内容更为复杂。我们可以基于网页内容生成签名,然后对比两者签名的相似度。可能会用到的算法有 [Jaccard index](https://en.wikipedia.org/wiki/Jaccard_index) 以及 [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity)。
### 抓取结果更新策略
@@ -209,7 +209,7 @@ class RemoveDuplicateUrls(MRJob):
### 用例:用户输入搜索词后,可以看到相关的搜索结果列表,列表每一项都包含由网页爬虫生成的页面标题及摘要
-* **客户端**向运行[反向代理](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)的 **Web 服务器**发送一个请求
+* **客户端**向运行[反向代理](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)的 **Web 服务器**发送一个请求
* **Web 服务器** 发送请求到 **Query API** 服务器
* **查询 API** 服务将会做这些事情:
* 解析查询参数
@@ -248,14 +248,14 @@ $ curl https://search.com/api/v1/search?query=hello+world
},
```
-对于服务器内部通信,我们可以使用 [远程过程调用协议(RPC)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
+对于服务器内部通信,我们可以使用 [远程过程调用协议(RPC)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
## 第四步:架构扩展
> 根据限制条件,找到并解决瓶颈。
-
+
**重要提示:不要直接从最初设计跳到最终设计!**
@@ -265,17 +265,17 @@ $ curl https://search.com/api/v1/search?query=hello+world
我们将会介绍一些组件来完成设计,并解决架构规模扩张问题。内置的负载均衡器将不做讨论以节省篇幅。
-**为了避免重复讨论**,请参考[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)相关部分来了解其要点、方案的权衡取舍以及替代方案。
+**为了避免重复讨论**,请参考[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)相关部分来了解其要点、方案的权衡取舍以及替代方案。
-* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
-* [负载均衡器](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
-* [水平扩展](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
-* [Web 服务器(反向代理)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
-* [API 服务器(应用层)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
-* [缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
-* [NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#nosql)
-* [一致性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
-* [可用性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
+* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
+* [负载均衡器](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
+* [水平扩展](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
+* [Web 服务器(反向代理)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
+* [API 服务器(应用层)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
+* [缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
+* [NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#nosql)
+* [一致性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
+* [可用性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
有些搜索词非常热门,有些则非常冷门。热门的搜索词可以通过诸如 Redis 或者 Memcached 之类的**内存缓存**来缩短响应时间,避免**倒排索引服务**以及**文档服务**过载。**内存缓存**同样适用于流量分布不均匀以及流量短时高峰问题。从内存中读取 1 MB 连续数据大约需要 250 微秒,而从 SSD 读取同样大小的数据要花费 4 倍的时间,从机械硬盘读取需要花费 80 倍以上的时间。1
@@ -284,7 +284,7 @@ $ curl https://search.com/api/v1/search?query=hello+world
* 为了处理数据大小问题以及网络请求负载,**倒排索引服务**和**文档服务**可能需要大量应用数据分片和数据复制。
* DNS 查询可能会成为瓶颈,**爬虫服务**最好专门维护一套定期更新的 DNS 查询服务。
-* 借助于[连接池](https://en.wikipedia.org/wiki/Connection_pool),即同时维持多个开放网络连接,可以提升**爬虫服务**的性能并减少内存使用量。
+* 借助于[连接池](https://en.wikipedia.org/wiki/Connection_pool),即同时维持多个开放网络连接,可以提升**爬虫服务**的性能并减少内存使用量。
* 改用 [UDP](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#用户数据报协议udp) 协议同样可以提升性能
* 网络爬虫受带宽影响较大,请确保带宽足够维持高吞吐量。
@@ -294,61 +294,61 @@ $ curl https://search.com/api/v1/search?query=hello+world
### SQL 扩展模式
-* [读取复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
-* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
-* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
-* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
-* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-调优)
+* [读取复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
+* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
+* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
+* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
+* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-调优)
#### NoSQL
-* [键-值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
-* [文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
-* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
-* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
-* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
+* [键-值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
+* [文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
+* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
+* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
+* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
### 缓存
* 在哪缓存
- * [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
- * [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
- * [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
- * [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
- * [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
+ * [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
+ * [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
+ * [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
+ * [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
+ * [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
* 什么需要缓存
- * [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
- * [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
+ * [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
+ * [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
* 何时更新缓存
- * [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
- * [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
- * [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
- * [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
+ * [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
+ * [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
+ * [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
+ * [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
### 异步与微服务
-* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
-* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
-* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
-* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
+* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
+* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
+* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
+* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
### 通信
* 可权衡选择的方案:
- * 与客户端的外部通信 - [使用 REST 作为 HTTP API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
- * 内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
-* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
+ * 与客户端的外部通信 - [使用 REST 作为 HTTP API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
+ * 内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
+* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
### 安全性
-请参阅[安全](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全)。
+请参阅[安全](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全)。
### 延迟数值
-请参阅[每个程序员都应该知道的延迟数](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数)。
+请参阅[每个程序员都应该知道的延迟数](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数)。
### 持续探讨
diff --git a/solutions/system_design/web_crawler/README.md b/solutions/system_design/web_crawler/README.md
index e6e79ad2..f3acfe34 100644
--- a/solutions/system_design/web_crawler/README.md
+++ b/solutions/system_design/web_crawler/README.md
@@ -46,7 +46,7 @@ Without an interviewer to address clarifying questions, we'll define some use ca
* For simplicity, count changes the same as new pages
* 100 billion searches per month
-Exercise the use of more traditional systems - don't use existing systems such as [solr](http://lucene.apache.org/solr/) or [nutch](http://nutch.apache.org/).
+Exercise the use of more traditional systems - don't use existing systems such as [solr](http://lucene.apache.org/solr/) or [nutch](http://nutch.apache.org/).
#### Calculate usage
@@ -69,7 +69,7 @@ Handy conversion guide:
> Outline a high level design with all important components.
-
+
## Step 3: Design core components
@@ -77,11 +77,11 @@ Handy conversion guide:
### Use case: Service crawls a list of urls
-We'll assume we have an initial list of `links_to_crawl` ranked initially based on overall site popularity. If this is not a reasonable assumption, we can seed the crawler with popular sites that link to outside content such as [Yahoo](https://www.yahoo.com/), [DMOZ](http://www.dmoz.org/), etc.
+We'll assume we have an initial list of `links_to_crawl` ranked initially based on overall site popularity. If this is not a reasonable assumption, we can seed the crawler with popular sites that link to outside content such as [Yahoo](https://www.yahoo.com/), [DMOZ](http://www.dmoz.org/), etc.
We'll use a table `crawled_links` to store processed links and their page signatures.
-We could store `links_to_crawl` and `crawled_links` in a key-value **NoSQL Database**. For the ranked links in `links_to_crawl`, we could use [Redis](https://redis.io/) with sorted sets to maintain a ranking of page links. We should discuss the [use cases and tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql).
+We could store `links_to_crawl` and `crawled_links` in a key-value **NoSQL Database**. For the ranked links in `links_to_crawl`, we could use [Redis](https://redis.io/) with sorted sets to maintain a ranking of page links. We should discuss the [use cases and tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql).
* The **Crawler Service** processes each page link by doing the following in a loop:
* Takes the top ranked page link to crawl
@@ -90,7 +90,7 @@ We could store `links_to_crawl` and `crawled_links` in a key-value **NoSQL Datab
* This prevents us from getting into a cycle
* Continue
* Else, crawls the link
- * Adds a job to the **Reverse Index Service** queue to generate a [reverse index](https://en.wikipedia.org/wiki/Search_engine_indexing)
+ * Adds a job to the **Reverse Index Service** queue to generate a [reverse index](https://en.wikipedia.org/wiki/Search_engine_indexing)
* Adds a job to the **Document Service** queue to generate a static title and snippet
* Generates the page signature
* Removes the link from `links_to_crawl` in the **NoSQL Database**
@@ -101,33 +101,33 @@ We could store `links_to_crawl` and `crawled_links` in a key-value **NoSQL Datab
`PagesDataStore` is an abstraction within the **Crawler Service** that uses the **NoSQL Database**:
```python
-class PagesDataStore(object):
+class PagesDataStore(object) :
- def __init__(self, db);
+ def __init__(self, db):
self.db = db
...
- def add_link_to_crawl(self, url):
+ def add_link_to_crawl(self, url) :
"""Add the given link to `links_to_crawl`."""
...
- def remove_link_to_crawl(self, url):
+ def remove_link_to_crawl(self, url) :
"""Remove the given link from `links_to_crawl`."""
...
- def reduce_priority_link_to_crawl(self, url)
+ def reduce_priority_link_to_crawl(self, url) :
"""Reduce the priority of a link in `links_to_crawl` to avoid cycles."""
...
- def extract_max_priority_page(self):
+ def extract_max_priority_page(self) :
"""Return the highest priority link in `links_to_crawl`."""
...
- def insert_crawled_link(self, url, signature):
+ def insert_crawled_link(self, url, signature) :
"""Add the given link to `crawled_links`."""
...
- def crawled_similar(self, signature):
+ def crawled_similar(self, signature) :
"""Determine if we've already crawled a page matching the given signature"""
...
```
@@ -135,9 +135,9 @@ class PagesDataStore(object):
`Page` is an abstraction within the **Crawler Service** that encapsulates a page, its contents, child urls, and signature:
```python
-class Page(object):
+class Page(object) :
- def __init__(self, url, contents, child_urls, signature):
+ def __init__(self, url, contents, child_urls, signature) :
self.url = url
self.contents = contents
self.child_urls = child_urls
@@ -147,33 +147,33 @@ class Page(object):
`Crawler` is the main class within **Crawler Service**, composed of `Page` and `PagesDataStore`.
```python
-class Crawler(object):
+class Crawler(object) :
- def __init__(self, data_store, reverse_index_queue, doc_index_queue):
+ def __init__(self, data_store, reverse_index_queue, doc_index_queue) :
self.data_store = data_store
self.reverse_index_queue = reverse_index_queue
self.doc_index_queue = doc_index_queue
- def create_signature(self, page):
+ def create_signature(self, page) :
"""Create signature based on url and contents."""
...
- def crawl_page(self, page):
+ def crawl_page(self, page) :
for url in page.child_urls:
- self.data_store.add_link_to_crawl(url)
- page.signature = self.create_signature(page)
- self.data_store.remove_link_to_crawl(page.url)
- self.data_store.insert_crawled_link(page.url, page.signature)
+ self.data_store.add_link_to_crawl(url)
+ page.signature = self.create_signature(page)
+ self.data_store.remove_link_to_crawl(page.url)
+ self.data_store.insert_crawled_link(page.url, page.signature)
- def crawl(self):
+ def crawl(self) :
while True:
- page = self.data_store.extract_max_priority_page()
+ page = self.data_store.extract_max_priority_page()
if page is None:
break
- if self.data_store.crawled_similar(page.signature):
- self.data_store.reduce_priority_link_to_crawl(page.url)
+ if self.data_store.crawled_similar(page.signature) :
+ self.data_store.reduce_priority_link_to_crawl(page.url)
else:
- self.crawl_page(page)
+ self.crawl_page(page)
```
### Handling duplicates
@@ -188,18 +188,18 @@ We'll want to remove duplicate urls:
* With 1 billion links to crawl, we could use **MapReduce** to output only entries that have a frequency of 1
```python
-class RemoveDuplicateUrls(MRJob):
+class RemoveDuplicateUrls(MRJob) :
- def mapper(self, _, line):
+ def mapper(self, _, line) :
yield line, 1
- def reducer(self, key, values):
- total = sum(values)
+ def reducer(self, key, values) :
+ total = sum(values)
if total == 1:
yield key, total
```
-Detecting duplicate content is more complex. We could generate a signature based on the contents of the page and compare those two signatures for similarity. Some potential algorithms are [Jaccard index](https://en.wikipedia.org/wiki/Jaccard_index) and [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity).
+Detecting duplicate content is more complex. We could generate a signature based on the contents of each page and compare the two signatures for similarity. Some potential algorithms are the [Jaccard index](https://en.wikipedia.org/wiki/Jaccard_index) and [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) .
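To make the Jaccard index concrete, here is a minimal word-shingle sketch (production systems typically use MinHash or SimHash over larger shingle sets; `k=3` here is an arbitrary choice for illustration):

```python
def shingles(text, k=3):
    """Split text into overlapping k-word shingles (word n-grams)."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}


def jaccard_index(text_a, text_b, k=3):
    """Jaccard index: |A intersect B| / |A union B| over shingle sets, in [0, 1]."""
    a, b = shingles(text_a, k), shingles(text_b, k)
    union = a | b
    return len(a & b) / len(union) if union else 1.0
```

Two near-identical pages score close to 1, unrelated pages close to 0, so a threshold (say, 0.9) can flag likely duplicates.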
### Determining when to update the crawl results
@@ -211,7 +211,7 @@ We might also choose to support a `Robots.txt` file that gives webmasters contro
### Use case: User inputs a search term and sees a list of relevant pages with titles and snippets
-* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
+* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* The **Web Server** forwards the request to the **Query API** server
* The **Query API** server does the following:
* Parses the query
@@ -224,7 +224,7 @@ We might also choose to support a `Robots.txt` file that gives webmasters contro
* The **Reverse Index Service** ranks the matching results and returns the top ones
* Uses the **Document Service** to return titles and snippets
-We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
+We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest) :
```
$ curl https://search.com/api/v1/search?query=hello+world
@@ -250,13 +250,13 @@ Response:
},
```
-For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc).
+For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc) .
## Step 4: Scale the design
> Identify and address bottlenecks, given the constraints.
-
+
**Important: Do not simply jump right into the final design from the initial design!**
@@ -268,15 +268,15 @@ We'll introduce some components to complete the design and to address scalabilit
*To avoid repeating discussions*, refer to the following [system design topics](https://github.com/donnemartin/system-design-primer#index-of-system-design-topics) for main talking points, tradeoffs, and alternatives:
-* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
-* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
-* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
-* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
-* [API server (application layer)](https://github.com/donnemartin/system-design-primer#application-layer)
-* [Cache](https://github.com/donnemartin/system-design-primer#cache)
-* [NoSQL](https://github.com/donnemartin/system-design-primer#nosql)
-* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
-* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
+* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
+* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
+* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
+* [Web server (reverse proxy) ](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
+* [API server (application layer) ](https://github.com/donnemartin/system-design-primer#application-layer)
+* [Cache](https://github.com/donnemartin/system-design-primer#cache)
+* [NoSQL](https://github.com/donnemartin/system-design-primer#nosql)
+* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
+* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
Some searches are very popular, while others are only executed once. Popular queries can be served from a **Memory Cache** such as Redis or Memcached to reduce response times and to avoid overloading the **Reverse Index Service** and **Document Service**. The **Memory Cache** is also useful for handling the unevenly distributed traffic and traffic spikes. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.[1]
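A cache-aside sketch of that **Memory Cache** might look like the following (the `QueryCache` class and `search_backend` callable are illustrative stand-ins for Redis/Memcached and the Reverse Index + Document Services, not part of the design above):

```python
from collections import OrderedDict


class QueryCache:
    """Cache-aside sketch with LRU eviction for popular search queries."""

    def __init__(self, search_backend, max_entries=1000):
        self.search_backend = search_backend  # stand-in for the real services
        self.max_entries = max_entries
        self._cache = OrderedDict()

    def search(self, query):
        if query in self._cache:
            self._cache.move_to_end(query)    # mark as recently used
            return self._cache[query]         # cache hit: skip the backend
        results = self.search_backend(query)  # cache miss: do the real work
        self._cache[query] = results
        if len(self._cache) > self.max_entries:
            self._cache.popitem(last=False)   # evict least recently used
        return results
```

Popular queries stay hot in the cache, while one-off queries are evicted, which is exactly the access pattern described above.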
@@ -284,7 +284,7 @@ Below are a few other optimizations to the **Crawling Service**:
* To handle the data size and request load, the **Reverse Index Service** and **Document Service** will likely need to make heavy use of sharding and federation.
* DNS lookup can be a bottleneck; the **Crawler Service** can keep its own DNS cache that is refreshed periodically
-* The **Crawler Service** can improve performance and reduce memory usage by keeping many open connections at a time, referred to as [connection pooling](https://en.wikipedia.org/wiki/Connection_pool)
+* The **Crawler Service** can improve performance and reduce memory usage by keeping many open connections at a time, referred to as [connection pooling](https://en.wikipedia.org/wiki/Connection_pool)
* Switching to [UDP](https://github.com/donnemartin/system-design-primer#user-datagram-protocol-udp) could also boost performance
* Web crawling is bandwidth intensive; ensure there is enough bandwidth to sustain high throughput
@@ -294,58 +294,58 @@ Below are a few other optimizations to the **Crawling Service**:
### SQL scaling patterns
-* [Read replicas](https://github.com/donnemartin/system-design-primer#master-slave-replication)
-* [Federation](https://github.com/donnemartin/system-design-primer#federation)
-* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
-* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
-* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
+* [Read replicas](https://github.com/donnemartin/system-design-primer#master-slave-replication)
+* [Federation](https://github.com/donnemartin/system-design-primer#federation)
+* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
+* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
+* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
#### NoSQL
-* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
-* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
-* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
-* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
-* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
+* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
+* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
+* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
+* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
+* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### Caching
* Where to cache
- * [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
- * [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
- * [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
- * [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
- * [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
+ * [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
+ * [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
+ * [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
+ * [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
+ * [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* What to cache
- * [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
- * [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
+ * [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
+ * [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* When to update the cache
- * [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
- * [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
- * [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
- * [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
+ * [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
+ * [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
+ * [Write-behind (write-back) ](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
+ * [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
### Asynchronism and microservices
-* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
-* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
-* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
-* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
+* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
+* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
+* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
+* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
### Communications
* Discuss tradeoffs:
- * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
- * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
-* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
+ * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
+ * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
+* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
### Security
-Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
+Refer to the [security section](https://github.com/donnemartin/system-design-primer#security) .
### Latency numbers
-See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
+See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know) .
### Ongoing
diff --git a/solutions/system_design/web_crawler/web_crawler_mapreduce.py b/solutions/system_design/web_crawler/web_crawler_mapreduce.py
index 8a5f8087..1611ebbc 100644
--- a/solutions/system_design/web_crawler/web_crawler_mapreduce.py
+++ b/solutions/system_design/web_crawler/web_crawler_mapreduce.py
@@ -3,23 +3,23 @@
from mrjob.job import MRJob
-class RemoveDuplicateUrls(MRJob):
+class RemoveDuplicateUrls(MRJob) :
- def mapper(self, _, line):
+ def mapper(self, _, line) :
yield line, 1
- def reducer(self, key, values):
- total = sum(values)
+ def reducer(self, key, values) :
+ total = sum(values)
if total == 1:
yield key, total
- def steps(self):
+ def steps(self) :
"""Run the map and reduce steps."""
return [
self.mr(mapper=self.mapper,
- reducer=self.reducer)
+ reducer=self.reducer)
]
if __name__ == '__main__':
- RemoveDuplicateUrls.run()
+ RemoveDuplicateUrls.run()
diff --git a/solutions/system_design/web_crawler/web_crawler_snippets.py b/solutions/system_design/web_crawler/web_crawler_snippets.py
index d84a2536..50ff648c 100644
--- a/solutions/system_design/web_crawler/web_crawler_snippets.py
+++ b/solutions/system_design/web_crawler/web_crawler_snippets.py
@@ -1,73 +1,73 @@
# -*- coding: utf-8 -*-
-class PagesDataStore(object):
+class PagesDataStore(object) :
- def __init__(self, db):
+ def __init__(self, db) :
self.db = db
pass
- def add_link_to_crawl(self, url):
+ def add_link_to_crawl(self, url) :
"""Add the given link to `links_to_crawl`."""
pass
- def remove_link_to_crawl(self, url):
+ def remove_link_to_crawl(self, url) :
"""Remove the given link from `links_to_crawl`."""
pass
- def reduce_priority_link_to_crawl(self, url):
+ def reduce_priority_link_to_crawl(self, url) :
"""Reduce the priority of a link in `links_to_crawl` to avoid cycles."""
pass
- def extract_max_priority_page(self):
+ def extract_max_priority_page(self) :
"""Return the highest priority link in `links_to_crawl`."""
pass
- def insert_crawled_link(self, url, signature):
+ def insert_crawled_link(self, url, signature) :
"""Add the given link to `crawled_links`."""
pass
- def crawled_similar(self, signature):
+ def crawled_similar(self, signature) :
"""Determine if we've already crawled a page matching the given signature"""
pass
-class Page(object):
+class Page(object) :
- def __init__(self, url, contents, child_urls):
+ def __init__(self, url, contents, child_urls) :
self.url = url
self.contents = contents
self.child_urls = child_urls
- self.signature = self.create_signature()
+ self.signature = self.create_signature()
- def create_signature(self):
+ def create_signature(self) :
# Create signature based on url and contents
pass
-class Crawler(object):
+class Crawler(object) :
- def __init__(self, pages, data_store, reverse_index_queue, doc_index_queue):
+ def __init__(self, pages, data_store, reverse_index_queue, doc_index_queue) :
self.pages = pages
self.data_store = data_store
self.reverse_index_queue = reverse_index_queue
self.doc_index_queue = doc_index_queue
- def crawl_page(self, page):
+ def crawl_page(self, page) :
for url in page.child_urls:
- self.data_store.add_link_to_crawl(url)
- self.reverse_index_queue.generate(page)
- self.doc_index_queue.generate(page)
- self.data_store.remove_link_to_crawl(page.url)
- self.data_store.insert_crawled_link(page.url, page.signature)
+ self.data_store.add_link_to_crawl(url)
+ self.reverse_index_queue.generate(page)
+ self.doc_index_queue.generate(page)
+ self.data_store.remove_link_to_crawl(page.url)
+ self.data_store.insert_crawled_link(page.url, page.signature)
- def crawl(self):
+ def crawl(self) :
while True:
- page = self.data_store.extract_max_priority_page()
+ page = self.data_store.extract_max_priority_page()
if page is None:
break
- if self.data_store.crawled_similar(page.signature):
- self.data_store.reduce_priority_link_to_crawl(page.url)
+ if self.data_store.crawled_similar(page.signature) :
+ self.data_store.reduce_priority_link_to_crawl(page.url)
else:
- self.crawl_page(page)
- page = self.data_store.extract_max_priority_page()
+ self.crawl_page(page)
+ page = self.data_store.extract_max_priority_page()
From 476dd1c5d99884000e2c3a7d243e8dace2fcf117 Mon Sep 17 00:00:00 2001
From: Vu
Date: Sun, 14 Mar 2021 17:26:49 +0700
Subject: [PATCH 02/11] fixed issue preview
---
resources/noat.cards/Application layer.md | 6 +++---
resources/noat.cards/Asynchronism.md | 15 ++++++---------
resources/noat.cards/Availability patterns.md | 18 +++++++++---------
3 files changed, 18 insertions(+), 21 deletions(-)
diff --git a/resources/noat.cards/Application layer.md b/resources/noat.cards/Application layer.md
index ac1fcf4d..188645b8 100644
--- a/resources/noat.cards/Application layer.md
+++ b/resources/noat.cards/Application layer.md
@@ -18,17 +18,17 @@ The single responsibility principle advocates for small and autonomous service
Workers in the application layer also help enable [asynchronism](https://github.com/donnemartin/system-design-primer#asynchronism) .
-### [](https://github.com/donnemartin/system-design-primer#microservices) Microservices
+### Microservices
Related to this discussion are [microservices](https://en.wikipedia.org/wiki/Microservices) , which can be described as a suite of independently deployable, small, modular services. Each service runs a unique process and communicates through a well-defined, lightweight mechanism to serve a business goal. [1](https://smartbear.com/learn/api-design/what-are-microservices)
Pinterest, for example, could have the following microservices: user profile, follower, feed, search, photo upload, etc.
-### [](https://github.com/donnemartin/system-design-primer#service-discovery) Service Discovery
+### Service Discovery
Systems such as [Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) can help services find each other by keeping track of registered names, addresses, ports, etc.
-### [](https://github.com/donnemartin/system-design-primer#disadvantages-application-layer) Disadvantage(s) : application layer
+### Disadvantage(s) : application layer
* Adding an application layer with loosely coupled services requires a different approach from an architectural, operations, and process viewpoint (vs a monolithic system) .
* Microservices can add complexity in terms of deployments and operations.
diff --git a/resources/noat.cards/Asynchronism.md b/resources/noat.cards/Asynchronism.md
index 946768d5..bfdae42c 100644
--- a/resources/noat.cards/Asynchronism.md
+++ b/resources/noat.cards/Asynchronism.md
@@ -10,7 +10,7 @@ _[Source: Intro to architecting systems for scale](http://lethain.com/introducti
Asynchronous workflows help reduce request times for expensive operations that would otherwise be performed in-line. They can also help by doing time-consuming work in advance, such as periodic aggregation of data.
-### [](https://github.com/donnemartin/system-design-primer#message-queues) Message queues
+### Message queues
Message queues receive, hold, and deliver messages. If an operation is too slow to perform inline, you can use a message queue with the following workflow:
@@ -25,26 +25,23 @@ RabbitMQ is popular but requires you to adapt to the 'AMQP' protocol and manage
Amazon SQS is hosted, but can have high latency and has the possibility of messages being delivered twice.
-### [](https://github.com/donnemartin/system-design-primer#task-queues) Task queues
+### Task queues
Task queues receive tasks and their related data, run them, then deliver their results. They can support scheduling and can be used to run computationally intensive jobs in the background.
Celery has support for scheduling and primarily has Python support.
-### [](https://github.com/donnemartin/system-design-primer#back-pressure) Back pressure
+### Back pressure
If queues start to grow significantly, the queue size can become larger than memory, resulting in cache misses, disk reads, and even slower performance. [Back pressure](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html) can help by limiting the queue size, thereby maintaining a high throughput rate and good response times for jobs already in the queue. Once the queue fills up, clients get a server busy or HTTP 503 status code to try again later. Clients can retry the request at a later time, perhaps with [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) .
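The back-pressure idea above can be sketched as a bounded queue that rejects work when full, paired with client-side exponential backoff (class and function names here are illustrative, and the status codes mirror the HTTP 503 behavior described):

```python
import queue
import random


class BoundedJobQueue:
    """Back-pressure sketch: a bounded queue that rejects jobs when full,
    analogous to a server returning HTTP 503 so clients retry later."""

    def __init__(self, max_size=2):
        self._queue = queue.Queue(maxsize=max_size)

    def submit(self, job):
        try:
            self._queue.put_nowait(job)
            return 202  # accepted
        except queue.Full:
            return 503  # server busy: apply back pressure


def backoff_delays(attempts, base=0.5, cap=30.0):
    """Exponential backoff with full jitter: the delay ceiling doubles
    per attempt, capped so retries never wait arbitrarily long."""
    return [random.uniform(0, min(cap, base * 2 ** i)) for i in range(attempts)]
```

Capping the queue keeps memory bounded and response times predictable for jobs already accepted, at the cost of occasionally turning clients away.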
-### [](https://github.com/donnemartin/system-design-primer#disadvantages-asynchronism) Disadvantage(s) : asynchronism
+### Disadvantage(s) : asynchronism
* Use cases such as inexpensive calculations and realtime workflows might be better suited for synchronous operations, as introducing queues can add delays and complexity.
-### [](https://github.com/donnemartin/system-design-primer#sources-and-further-reading-11) Source(s) and further reading
+### Source(s) and further reading
* [It's all a numbers game](https://www.youtube.com/watch?v=1KRYH75wgy4)
* [Applying back pressure when overloaded](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html)
* [Little's law](https://en.wikipedia.org/wiki/Little%27s_law)
-* [What is the difference between a message queue and a task queue?](https://www.quora.com/What-is-the-difference-between-a-message-queue-and-a-task-queue-Why-would-a-task-queue-require-a-message-broker-like-RabbitMQ-Redis-Celery-or-IronMQ-to-function)
-
-[](https://github.com/donnemartin/system-design-primer#communication)
----------------------------------------------------------------------
\ No newline at end of file
+* [What is the difference between a message queue and a task queue?](https://www.quora.com/What-is-the-difference-between-a-message-queue-and-a-task-queue-Why-would-a-task-queue-require-a-message-broker-like-RabbitMQ-Redis-Celery-or-IronMQ-to-function)
\ No newline at end of file
diff --git a/resources/noat.cards/Availability patterns.md b/resources/noat.cards/Availability patterns.md
index 3cce966d..5f79527a 100644
--- a/resources/noat.cards/Availability patterns.md
+++ b/resources/noat.cards/Availability patterns.md
@@ -7,7 +7,7 @@ isdraft = False
There are two main patterns to support high availability: fail-over and replication.
-### [](https://github.com/donnemartin/system-design-primer#active-passive) Active-passive (Fail-Over)
+### Active-passive (Fail-Over)
With active-passive fail-over, heartbeats are sent between the active and the passive server on standby. If the heartbeat is interrupted, the passive server takes over the active's IP address and resumes service.
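The heartbeat mechanism can be sketched as follows (the `FailoverMonitor` class is a hypothetical illustration; real systems use tools like keepalived or cluster managers, and the injectable `clock` exists only to make the sketch testable):

```python
import time


class FailoverMonitor:
    """Active-passive fail-over sketch: the passive server promotes itself
    if no heartbeat arrives from the active server within `timeout` seconds."""

    def __init__(self, timeout=5.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self.role = 'passive'
        self.last_heartbeat = self.clock()

    def on_heartbeat(self):
        """Record a heartbeat received from the active server."""
        self.last_heartbeat = self.clock()

    def check(self):
        """Promote to active if the heartbeat has been silent too long."""
        if self.role == 'passive' and self.clock() - self.last_heartbeat > self.timeout:
            self.role = 'active'  # take over the active's IP and resume service
        return self.role
```

The timeout choice trades failover speed against false promotions caused by transient network hiccups.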
@@ -15,7 +15,7 @@ The length of downtime is determined by whether the passive server is already ru
Active-passive failover can also be referred to as master-slave failover.
-### [](https://github.com/donnemartin/system-design-primer#active-active) Active-active (Fail-Over)
+### Active-active (Fail-Over)
In active-active, both servers are managing traffic, spreading the load between them.
@@ -23,39 +23,39 @@ If the servers are public-facing, the DNS would need to know about the public IP
Active-active failover can also be referred to as master-master failover.
-### [](https://github.com/donnemartin/system-design-primer#disadvantages-failover) Disadvantage(s) : failover
+### Disadvantage(s) : failover
* Fail-over adds more hardware and additional complexity.
* There is a potential for loss of data if the active system fails before any newly written data can be replicated to the passive.
-### [](https://github.com/donnemartin/system-design-primer#master-slave-and-master-master) Master-slave replication
+### Master-slave replication
The master serves reads and writes, replicating writes to one or more slaves, which serve only reads. Slaves can also replicate to additional slaves in a tree-like fashion. If the master goes offline, the system can continue to operate in read-only mode until a slave is promoted to a master or a new master is provisioned.
![Master-slave replication](https://camo.githubusercontent.com/6a097809b9690236258747d969b1d3e0d93bb8ca/687474703a2f2f692e696d6775722e636f6d2f4339696f47746e2e706e67)
_[Source: Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/) _
-### [](https://github.com/donnemartin/system-design-primer#disadvantages-master-slave-replication) Disadvantage(s) : master-slave replication
+### Disadvantage(s) : master-slave replication
* Additional logic is needed to promote a slave to a master.
* See [Disadvantage(s) : replication](https://github.com/donnemartin/system-design-primer#disadvantages-replication) for points related to both master-slave and master-master.
-### [](https://github.com/donnemartin/system-design-primer#master-master-replication) Master-master replication
+### Master-master replication
Both masters serve reads and writes and coordinate with each other on writes. If either master goes down, the system can continue to operate with both reads and writes.
![Master-master replication](https://camo.githubusercontent.com/5862604b102ee97d85f86f89edda44bde85a5b7f/687474703a2f2f692e696d6775722e636f6d2f6b7241484c47672e706e67)
_[Source: Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/) _
-### [](https://github.com/donnemartin/system-design-primer#disadvantages-master-master-replication) Disadvantage(s) : master-master replication
+### Disadvantage(s) : master-master replication
* You'll need a load balancer or you'll need to make changes to your application logic to determine where to write.
* Most master-master systems are either loosely consistent (violating ACID) or have increased write latency due to synchronization.
* Conflict resolution comes more into play as more write nodes are added and as latency increases.
* See [Disadvantage(s) : replication](https://github.com/donnemartin/system-design-primer#disadvantages-replication) for points related to both master-slave and master-master.
-### [](https://github.com/donnemartin/system-design-primer#disadvantages-replication) Disadvantage(s) : replication
+### Disadvantage(s) : replication
* There is a potential for loss of data if the master fails before any newly written data can be replicated to other nodes.
* Writes are replayed to the read replicas. If there are a lot of writes, the read replicas can get bogged down with replaying writes and can't do as many reads.
@@ -63,7 +63,7 @@ _[Source: Scalability, availability, stability, patterns](http://www.slideshare.
* On some systems, writing to the master can spawn multiple threads to write in parallel, whereas read replicas only support writing sequentially with a single thread.
* Replication adds more hardware and additional complexity.
-### [](https://github.com/donnemartin/system-design-primer#sources-and-further-reading-replication) Source(s) and further reading: replication
+### Source(s) and further reading: replication
* [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
* [Multi-master replication](https://en.wikipedia.org/wiki/Multi-master_replication)
\ No newline at end of file
From 5de8751070bfa05929943af15b0915cf5f8eba66 Mon Sep 17 00:00:00 2001
From: Vu
Date: Sun, 14 Mar 2021 22:13:28 +0700
Subject: [PATCH 03/11] update more cards
---
resources/noat.cards/Application layer.md | 17 ++++---
resources/noat.cards/Asynchronism.md | 14 +++---
resources/noat.cards/Availability patterns.md | 30 ++++++-------
.../noat.cards/Availability vs consistency.md | 35 +++++++++++++++
resources/noat.cards/Base 62.md | 13 ++++++
resources/noat.cards/Cache locations.md | 42 ++++++++++++++++++
resources/noat.cards/Cache-aside.md | 37 ++++++++++++++++
resources/noat.cards/Cache.md | 31 +++++++++++++
resources/noat.cards/Communication.md | 5 +++
resources/noat.cards/Consistency patterns.md | 32 ++++++++++++++
.../noat.cards/Content delivery network.md | 44 +++++++++++++++++++
.../Database caching, what to cache.md | 38 ++++++++++++++++
resources/noat.cards/Database.md | 23 ++++++++++
13 files changed, 330 insertions(+), 31 deletions(-)
create mode 100644 resources/noat.cards/Availability vs consistency.md
create mode 100644 resources/noat.cards/Base 62.md
create mode 100644 resources/noat.cards/Cache locations.md
create mode 100644 resources/noat.cards/Cache-aside.md
create mode 100644 resources/noat.cards/Cache.md
create mode 100644 resources/noat.cards/Communication.md
create mode 100644 resources/noat.cards/Consistency patterns.md
create mode 100644 resources/noat.cards/Content delivery network.md
create mode 100644 resources/noat.cards/Database caching, what to cache.md
create mode 100644 resources/noat.cards/Database.md
diff --git a/resources/noat.cards/Application layer.md b/resources/noat.cards/Application layer.md
index 188645b8..f24e4f3e 100644
--- a/resources/noat.cards/Application layer.md
+++ b/resources/noat.cards/Application layer.md
@@ -3,8 +3,7 @@ noatcards = True
isdraft = False
+++
-Application layer
------------------
+# Application layer
### Application layer - Introduction
@@ -30,13 +29,13 @@ Systems such as [Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-
### Disadvantage(s) : application layer
-* Adding an application layer with loosely coupled services requires a different approach from an architectural, operations, and process viewpoint (vs a monolithic system) .
-* Microservices can add complexity in terms of deployments and operations.
+- Adding an application layer with loosely coupled services requires a different approach from an architectural, operations, and process viewpoint (vs a monolithic system) .
+- Microservices can add complexity in terms of deployments and operations.
### [](https://github.com/donnemartin/system-design-primer#sources-and-further-reading-9) Source(s) and further reading
-* [Intro to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale)
-* [Crack the system design interview](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)
-* [Service oriented architecture](https://en.wikipedia.org/wiki/Service-oriented_architecture)
-* [Introduction to Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper)
-* [Here's what you need to know about building microservices](https://cloudncode.wordpress.com/2016/07/22/msa-getting-started/)
\ No newline at end of file
+- [Intro to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale)
+- [Crack the system design interview](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)
+- [Service oriented architecture](https://en.wikipedia.org/wiki/Service-oriented_architecture)
+- [Introduction to Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper)
+- [Here's what you need to know about building microservices](https://cloudncode.wordpress.com/2016/07/22/msa-getting-started/)
\ No newline at end of file
diff --git a/resources/noat.cards/Asynchronism.md b/resources/noat.cards/Asynchronism.md
index bfdae42c..64b44e47 100644
--- a/resources/noat.cards/Asynchronism.md
+++ b/resources/noat.cards/Asynchronism.md
@@ -14,8 +14,8 @@ Asynchronous workflows help reduce request times for expensive operations that w
Message queues receive, hold, and deliver messages. If an operation is too slow to perform inline, you can use a message queue with the following workflow:
-* An application publishes a job to the queue, then notifies the user of job status
-* A worker picks up the job from the queue, processes it, then signals the job is complete
+- An application publishes a job to the queue, then notifies the user of job status
+- A worker picks up the job from the queue, processes it, then signals the job is complete
The user is not blocked and the job is processed in the background. During this time, the client might optionally do a small amount of processing to make it seem like the task has completed. For example, if posting a tweet, the tweet could be instantly posted to your timeline, but it could take some time before your tweet is actually delivered to all of your followers.
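The publish/worker workflow above can be sketched with Python's standard `queue` module as a stand-in for a real broker such as RabbitMQ (the job payload and `publish` helper are illustrative):

```python
import queue
import threading

jobs = queue.Queue()  # stand-in for a real message broker
results = []

def publish(job):
    """Application: publish a job to the queue, then return immediately."""
    jobs.put(job)
    return "job accepted"  # the user is notified of job status without blocking

def worker():
    """Worker: pick up jobs from the queue, process them, signal completion."""
    while True:
        job = jobs.get()
        results.append(job.upper())  # placeholder for the expensive operation
        jobs.task_done()             # signals the job is complete

threading.Thread(target=worker, daemon=True).start()

publish("render timeline")
jobs.join()  # in production the user would not wait; shown here for determinism
```

A real deployment would replace the in-process queue with a broker (RabbitMQ, Amazon SQS) so workers can run on separate machines.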
@@ -37,11 +37,11 @@ If queues start to grow significantly, the queue size can become larger than mem
### Disadvantage(s) : asynchronism
-* Use cases such as inexpensive calculations and realtime workflows might be better suited for synchronous operations, as introducing queues can add delays and complexity.
+- Use cases such as inexpensive calculations and realtime workflows might be better suited for synchronous operations, as introducing queues can add delays and complexity.
### Source(s) and further reading
-* [It's all a numbers game](https://www.youtube.com/watch?v=1KRYH75wgy4)
-* [Applying back pressure when overloaded](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html)
-* [Little's law](https://en.wikipedia.org/wiki/Little%27s_law)
-* [What is the difference between a message queue and a task queue?](https://www.quora.com/What-is-the-difference-between-a-message-queue-and-a-task-queue-Why-would-a-task-queue-require-a-message-broker-like-RabbitMQ-Redis-Celery-or-IronMQ-to-function)
\ No newline at end of file
+- [It's all a numbers game](https://www.youtube.com/watch?v=1KRYH75wgy4)
+- [Applying back pressure when overloaded](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html)
+- [Little's law](https://en.wikipedia.org/wiki/Little%27s_law)
+- [What is the difference between a message queue and a task queue?](https://www.quora.com/What-is-the-difference-between-a-message-queue-and-a-task-queue-Why-would-a-task-queue-require-a-message-broker-like-RabbitMQ-Redis-Celery-or-IronMQ-to-function)
\ No newline at end of file
diff --git a/resources/noat.cards/Availability patterns.md b/resources/noat.cards/Availability patterns.md
index 5f79527a..29f46323 100644
--- a/resources/noat.cards/Availability patterns.md
+++ b/resources/noat.cards/Availability patterns.md
@@ -25,8 +25,8 @@ Active-active failover can also be referred to as master-master failover.
### Disadvantage(s) : failover
-* Fail-over adds more hardware and additional complexity.
-* There is a potential for loss of data if the active system fails before any newly written data can be replicated to the passive.
+- Fail-over adds more hardware and additional complexity.
+- There is a potential for loss of data if the active system fails before any newly written data can be replicated to the passive.
### Master-slave replication
@@ -38,8 +38,8 @@ _[Source: Scalability, availability, stability, patterns](http://www.slideshare.
### Disadvantage(s) : master-slave replication
-* Additional logic is needed to promote a slave to a master.
-* See [Disadvantage(s) : replication](https://github.com/donnemartin/system-design-primer#disadvantages-replication) for points related to both master-slave and master-master.
+- Additional logic is needed to promote a slave to a master.
+- See [Disadvantage(s) : replication](https://github.com/donnemartin/system-design-primer#disadvantages-replication) for points related to both master-slave and master-master.
### Master-master replication
@@ -50,20 +50,20 @@ _[Source: Scalability, availability, stability, patterns](http://www.slideshare.
### Disadvantage(s) : master-master replication
-* You'll need a load balancer or you'll need to make changes to your application logic to determine where to write.
-* Most master-master systems are either loosely consistent (violating ACID) or have increased write latency due to synchronization.
-* Conflict resolution comes more into play as more write nodes are added and as latency increases.
-* See [Disadvantage(s) : replication](https://github.com/donnemartin/system-design-primer#disadvantages-replication) for points related to both master-slave and master-master.
+- You'll need a load balancer or you'll need to make changes to your application logic to determine where to write.
+- Most master-master systems are either loosely consistent (violating ACID) or have increased write latency due to synchronization.
+- Conflict resolution comes more into play as more write nodes are added and as latency increases.
+- See [Disadvantage(s) : replication](https://github.com/donnemartin/system-design-primer#disadvantages-replication) for points related to both master-slave and master-master.
### Disadvantage(s) : replication
-* There is a potential for loss of data if the master fails before any newly written data can be replicated to other nodes.
-* Writes are replayed to the read replicas. If there are a lot of writes, the read replicas can get bogged down with replaying writes and can't do as many reads.
-* The more read slaves, the more you have to replicate, which leads to greater replication lag.
-* On some systems, writing to the master can spawn multiple threads to write in parallel, whereas read replicas only support writing sequentially with a single thread.
-* Replication adds more hardware and additional complexity.
+- There is a potential for loss of data if the master fails before any newly written data can be replicated to other nodes.
+- Writes are replayed to the read replicas. If there are a lot of writes, the read replicas can get bogged down with replaying writes and can't do as many reads.
+- The more read slaves, the more you have to replicate, which leads to greater replication lag.
+- On some systems, writing to the master can spawn multiple threads to write in parallel, whereas read replicas only support writing sequentially with a single thread.
+- Replication adds more hardware and additional complexity.
### Source(s) and further reading: replication
-* [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
-* [Multi-master replication](https://en.wikipedia.org/wiki/Multi-master_replication)
\ No newline at end of file
+- [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
+- [Multi-master replication](https://en.wikipedia.org/wiki/Multi-master_replication)
\ No newline at end of file
diff --git a/resources/noat.cards/Availability vs consistency.md b/resources/noat.cards/Availability vs consistency.md
new file mode 100644
index 00000000..bea7397f
--- /dev/null
+++ b/resources/noat.cards/Availability vs consistency.md
@@ -0,0 +1,35 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Availability vs consistency
+
+### CAP theorem
+
+[ ](https://camo.githubusercontent.com/13719354da7dcd34cd79ff5f8b6306a67bc18261/687474703a2f2f692e696d6775722e636f6d2f62674c4d4932752e706e67)
+_[Source: CAP theorem revisited](http://robertgreiner.com/2014/08/cap-theorem-revisited) _
+
+In a distributed computer system, you can only support two of the following guarantees:
+
+- Consistency - Every read receives the most recent write or an error
+- Availability - Every request receives a response, without guarantee that it contains the most recent version of the information
+- Partition Tolerance - The system continues to operate despite arbitrary partitioning due to network failures
+
+_Networks aren't reliable, so you'll need to support partition tolerance. You'll need to make a software tradeoff between consistency and availability._
+
+#### CP - consistency and partition tolerance
+
+Waiting for a response from the partitioned node might result in a timeout error. CP is a good choice if your business needs require atomic reads and writes.
+
+#### AP - availability and partition tolerance
+
+Responses return the most readily available version of the data, which might not be the latest. Writes might take some time to propagate when the partition is resolved.
+
+AP is a good choice if the business needs allow for [eventual consistency](https://github.com/donnemartin/system-design-primer#eventual-consistency) or when the system needs to continue working despite external errors.
+
+### Source(s) and further reading
+
+- [CAP theorem revisited](http://robertgreiner.com/2014/08/cap-theorem-revisited/)
+- [A plain english introduction to CAP theorem](http://ksat.me/a-plain-english-introduction-to-cap-theorem/)
+- [CAP FAQ](https://github.com/henryr/cap-faq)
\ No newline at end of file
diff --git a/resources/noat.cards/Base 62.md b/resources/noat.cards/Base 62.md
new file mode 100644
index 00000000..b06d1907
--- /dev/null
+++ b/resources/noat.cards/Base 62.md
@@ -0,0 +1,13 @@
++++
+noatcards = True
+isdraft = False
++++
+
+
+# Base 62
+---
+
+## Introduction of base 62
+- Encodes to `[a-zA-Z0-9]`, which works well for URLs, eliminating the need for escaping special characters
+- There is only one hash result for the original input, and the operation is deterministic (no randomness involved)
+- Base 64 is another popular encoding, but it can cause issues for URLs because of the additional `+` and `/` characters
\ No newline at end of file
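The deterministic encoding described in the Base 62 card can be sketched in a few lines of Python (alphabet ordering is a common convention, not mandated by the card):

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def base62_encode(num):
    """Encode a non-negative integer using only [a-zA-Z0-9], safe for URLs."""
    if num == 0:
        return ALPHABET[0]
    digits = []
    while num:
        num, rem = divmod(num, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

def base62_decode(s):
    """Invert the encoding: each character is one base-62 digit."""
    num = 0
    for ch in s:
        num = num * 62 + ALPHABET.index(ch)
    return num
```

Because the mapping is deterministic, the same input always yields the same short code, which is what makes it suitable for URL shorteners.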
diff --git a/resources/noat.cards/Cache locations.md b/resources/noat.cards/Cache locations.md
new file mode 100644
index 00000000..c2da1ef2
--- /dev/null
+++ b/resources/noat.cards/Cache locations.md
@@ -0,0 +1,42 @@
++++
+noatcards = True
+isdraft = False
++++
+
+
+# Cache locations
+
+
+### Client caching
+
+Caches can be located on the client side (OS or browser) , [server side](https://github.com/donnemartin/system-design-primer#reverse-proxy) , or in a distinct cache layer.
+
+### CDN caching
+
+[CDNs](https://github.com/donnemartin/system-design-primer#content-delivery-network) are considered a type of cache.
+
+### Web server caching
+
+[Reverse proxies](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server) and caches such as [Varnish](https://www.varnish-cache.org/) can serve static and dynamic content directly. Web servers can also cache requests, returning responses without having to contact application servers.
+
+### Database caching
+
+Your database usually includes some level of caching in a default configuration, optimized for a generic use case. Tweaking these settings for specific usage patterns can further boost performance.
+
+### Application caching
+
+In-memory caches such as Memcached and Redis are key-value stores between your application and your data storage. Since the data is held in RAM, it is much faster than typical databases where data is stored on disk. RAM is more limited than disk, so [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) algorithms such as [least recently used (LRU) ](https://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used) can help invalidate 'cold' entries and keep 'hot' data in RAM.
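As a rough illustration of the LRU policy mentioned above, an in-memory cache with least-recently-used eviction can be sketched with `collections.OrderedDict` (a sketch, not how Memcached or Redis implement it internally):

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the least recently used ('cold') entry once capacity is reached."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # mark as 'hot' (most recently used)
        return self.entries[key]

    def set(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the coldest entry
```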
+
+Redis has the following additional features:
+
+- Persistence option
+- Built-in data structures such as sorted sets and lists
+
+There are multiple levels you can cache that fall into two general categories: database queries and objects:
+
+- Row level
+- Query-level
+- Fully-formed serializable objects
+- Fully-rendered HTML
+
+Generally, you should try to avoid file-based caching, as it makes cloning and auto-scaling more difficult.
\ No newline at end of file
diff --git a/resources/noat.cards/Cache-aside.md b/resources/noat.cards/Cache-aside.md
new file mode 100644
index 00000000..240e747c
--- /dev/null
+++ b/resources/noat.cards/Cache-aside.md
@@ -0,0 +1,37 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Cache-aside
+
+## Introduction
+
+[ ](https://camo.githubusercontent.com/7f5934e49a678b67f65e5ed53134bc258b007ebb/687474703a2f2f692e696d6775722e636f6d2f4f4e6a4f52716b2e706e67)
+_[Source: From cache to in-memory data grid](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast) _
+
+The application is responsible for reading and writing from storage. The cache does not interact with storage directly. The application does the following:
+
+- Look for entry in cache, resulting in a cache miss
+- Load entry from the database
+- Add entry to cache
+- Return entry
+```python
+def get_user(self, user_id):
+    user = cache.get("user.{0}", user_id)
+    if user is None:
+        user = db.query("SELECT * FROM users WHERE user_id = {0}", user_id)
+        if user is not None:
+            key = "user.{0}".format(user_id)
+            cache.set(key, json.dumps(user))
+    return user
+```
+
+[Memcached](https://memcached.org/) is generally used in this manner.
+
+Subsequent reads of data added to cache are fast. Cache-aside is also referred to as lazy loading. Only requested data is cached, which avoids filling up the cache with data that isn't requested.
+
+## Disadvantage(s) : cache-aside
+
+- Each cache miss results in three trips, which can cause a noticeable delay.
+- Data can become stale if it is updated in the database. This issue is mitigated by setting a time-to-live (TTL) which forces an update of the cache entry, or by using write-through.
+- When a node fails, it is replaced by a new, empty node, increasing latency.
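The TTL mitigation above can be sketched with a wrapper that expires entries after a fixed number of seconds, a stand-in for a store-side expiry such as Redis `SETEX` (class and names are illustrative):

```python
import time

class TTLCache:
    """Entries expire ttl_seconds after being set, forcing a database refresh."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}

    def set(self, key, value):
        self.entries[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        item = self.entries.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.monotonic() >= expires_at:
            del self.entries[key]  # stale: the caller falls back to the database
            return None
        return value
```

A short TTL bounds staleness at the cost of more cache misses; write-through removes the staleness window entirely but adds write latency.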
\ No newline at end of file
diff --git a/resources/noat.cards/Cache.md b/resources/noat.cards/Cache.md
new file mode 100644
index 00000000..c013729b
--- /dev/null
+++ b/resources/noat.cards/Cache.md
@@ -0,0 +1,31 @@
++++
+noatcards = True
+isdraft = False
++++
+
+
+# Cache
+
+### Cache - Introduction
+[ ](https://camo.githubusercontent.com/7acedde6aa7853baf2eb4a53f88e2595ebe43756/687474703a2f2f692e696d6775722e636f6d2f51367a32344c612e706e67)
+_[Source: Scalable system design patterns](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html) _
+
+Caching improves page load times and can reduce the load on your servers and databases. In this model, the dispatcher will first look up whether the request has been made before and try to find the previous result to return, in order to save the actual execution.
+
+Databases often benefit from a uniform distribution of reads and writes across their partitions. Popular items can skew the distribution, causing bottlenecks. Putting a cache in front of a database can help absorb uneven loads and spikes in traffic.
+
+### Disadvantage(s) : cache
+
+- Need to maintain consistency between caches and the source of truth such as the database through [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) .
+- Need to make application changes such as adding Redis or memcached.
+- Cache invalidation is a difficult problem; there is additional complexity associated with when to update the cache.
+
+### Source(s) and further reading
+
+- [From cache to in-memory data grid](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast)
+- [Scalable system design patterns](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html)
+- [Introduction to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale/)
+- [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
+- [Scalability](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
+- [AWS ElastiCache strategies](http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Strategies.html)
+- [Wikipedia](https://en.wikipedia.org/wiki/Cache_(computing))
\ No newline at end of file
diff --git a/resources/noat.cards/Communication.md b/resources/noat.cards/Communication.md
new file mode 100644
index 00000000..e8e434a7
--- /dev/null
+++ b/resources/noat.cards/Communication.md
@@ -0,0 +1,5 @@
+# Communication
+
+[ ](https://camo.githubusercontent.com/1d761d5688d28ce1fb12a0f1c8191bca96eece4c/687474703a2f2f692e696d6775722e636f6d2f354b656f6351732e6a7067)
+_[Source: OSI 7 layer model](http://www.escotal.com/osilayer.html) _
\ No newline at end of file
diff --git a/resources/noat.cards/Consistency patterns.md b/resources/noat.cards/Consistency patterns.md
new file mode 100644
index 00000000..0b732532
--- /dev/null
+++ b/resources/noat.cards/Consistency patterns.md
@@ -0,0 +1,32 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Consistency patterns
+
+## Introduction
+
+With multiple copies of the same data, we are faced with options on how to synchronize them so clients have a consistent view of the data. Recall the definition of consistency from the [CAP theorem](https://github.com/donnemartin/system-design-primer#cap-theorem) - Every read receives the most recent write or an error.
+
+### Weak consistency
+
+After a write, reads may or may not see it. A best effort approach is taken.
+
+This approach is seen in systems such as memcached. Weak consistency works well in real-time use cases such as VoIP, video chat, and real-time multiplayer games. For example, if you are on a phone call and lose reception for a few seconds, when you regain connection you do not hear what was spoken during the connection loss.
+
+### Eventual consistency
+
+After a write, reads will eventually see it (typically within milliseconds) . Data is replicated asynchronously.
+
+This approach is seen in systems such as DNS and email. Eventual consistency works well in highly available systems.
+
+### Strong consistency
+
+After a write, reads will see it. Data is replicated synchronously.
+
+This approach is seen in file systems and RDBMSes. Strong consistency works well in systems that need transactions.
+
+### Source(s) and further reading
+
+- [Transactions across data centers](http://snarfed.org/transactions_across_datacenters_io.html)
\ No newline at end of file
diff --git a/resources/noat.cards/Content delivery network.md b/resources/noat.cards/Content delivery network.md
new file mode 100644
index 00000000..8dc05c6b
--- /dev/null
+++ b/resources/noat.cards/Content delivery network.md
@@ -0,0 +1,44 @@
++++
+noatcards = True
+isdraft = False
++++
+
+
+# Content delivery network
+
+
+[ ](https://camo.githubusercontent.com/853a8603651149c686bf3c504769fc594ff08849/687474703a2f2f692e696d6775722e636f6d2f683954417547492e6a7067)
+_[Source: Why use a CDN](https://www.creative-artworks.eu/why-use-a-content-delivery-network-cdn/) _
+
+A content delivery network (CDN) is a globally distributed network of proxy servers, serving content from locations closer to the user. Generally, static files such as HTML/CSS/JS, photos, and videos are served from the CDN, although some CDNs such as Amazon's CloudFront support dynamic content. The site's DNS resolution will tell clients which server to contact.
+
+Serving content from CDNs can significantly improve performance in two ways:
+
+- Users receive content at data centers close to them
+- Your servers do not have to serve requests that the CDN fulfills
+
+### Push CDNs
+
+Push CDNs receive new content whenever changes occur on your server. You take full responsibility for providing content, uploading directly to the CDN and rewriting URLs to point to the CDN. You can configure when content expires and when it is updated. Content is uploaded only when it is new or changed, minimizing traffic, but maximizing storage.
+
+Sites with a small amount of traffic or sites with content that isn't often updated work well with push CDNs. Content is placed on the CDNs once, instead of being re-pulled at regular intervals.
+
+### Pull CDNs
+
+Pull CDNs grab new content from your server when the first user requests the content. You leave the content on your server and rewrite URLs to point to the CDN. This results in a slower request until the content is cached on the CDN.
+
+[time-to-live (TTL) ](https://en.wikipedia.org/wiki/Time_to_live) determines how long content is cached. Pull CDNs minimize storage space on the CDN, but can create redundant traffic if files expire and are pulled before they have actually changed.
+
+Sites with heavy traffic work well with pull CDNs, as traffic is spread out more evenly with only recently-requested content remaining on the CDN.
+
+### Disadvantage(s) : CDN
+
+- CDN costs could be significant depending on traffic, although this should be weighed with additional costs you would incur not using a CDN.
+- Content might be stale if it is updated before the TTL expires it.
+- CDNs require changing URLs for static content to point to the CDN.
+
+### Source(s) and further reading
+
+- [Globally distributed content delivery](http://repository.cmu.edu/cgi/viewcontent.cgi?article=2112&context=compsci)
+- [The differences between push and pull CDNs](http://www.travelblogadvice.com/technical/the-differences-between-push-and-pull-cdns/)
+- [Wikipedia](https://en.wikipedia.org/wiki/Content_delivery_network)
\ No newline at end of file
diff --git a/resources/noat.cards/Database caching, what to cache.md b/resources/noat.cards/Database caching, what to cache.md
new file mode 100644
index 00000000..429e5617
--- /dev/null
+++ b/resources/noat.cards/Database caching, what to cache.md
@@ -0,0 +1,38 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Database caching, what to cache
+
+### Introduction
+
+There are multiple levels you can cache that fall into two general categories: database queries and objects:
+
+- Row level
+- Query-level
+- Fully-formed serializable objects
+- Fully-rendered HTML
+
+Generally, you should try to avoid file-based caching, as it makes cloning and auto-scaling more difficult.
+
+### Caching at the database query level
+
+Whenever you query the database, hash the query as a key and store the result to the cache. This approach suffers from expiration issues:
+
+- Hard to delete a cached result with complex queries
+- If one piece of data changes such as a table cell, you need to delete all cached queries that might include the changed cell
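A minimal sketch of query-level caching, hashing the SQL text as the cache key; the `db` handle and blunt `invalidate_all` helper are illustrative, showing why expiration is hard:

```python
import hashlib
import json

query_cache = {}  # stand-in for Memcached/Redis

def cached_query(db, sql):
    """Hash the query text as the key; store the serialized result on a miss."""
    key = "sql." + hashlib.sha256(sql.encode()).hexdigest()
    if key in query_cache:
        return json.loads(query_cache[key])
    result = db.query(sql)  # hypothetical database handle
    query_cache[key] = json.dumps(result)
    return result

def invalidate_all():
    """If one table cell changes, it is hard to know which cached queries
    include it, so the blunt option is to drop every cached result."""
    query_cache.clear()
```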
+
+### Caching at the object level
+
+See your data as an object, similar to what you do with your application code. Have your application assemble the dataset from the database into a class instance or a data structure(s) :
+
+- Remove the object from cache if its underlying data has changed
+- Allows for asynchronous processing: workers assemble objects by consuming the latest cached object
+
+Suggestions of what to cache:
+
+- User sessions
+- Fully rendered web pages
+- Activity streams
+- User graph data
\ No newline at end of file
diff --git a/resources/noat.cards/Database.md b/resources/noat.cards/Database.md
new file mode 100644
index 00000000..ac0cd006
--- /dev/null
+++ b/resources/noat.cards/Database.md
@@ -0,0 +1,23 @@
++++
+noatcards = True
+isdraft = False
++++
+
+
+# Database
+
+[ ](https://camo.githubusercontent.com/15a7553727e6da98d0de5e9ca3792f6d2b5e92d4/687474703a2f2f692e696d6775722e636f6d2f586b6d3543587a2e706e67)
+_[Source: Scaling up to your first 10 million users](https://www.youtube.com/watch?v=vg5onp8TU6Q) _
+
+### Relational database management system (RDBMS)
+
+A relational database is a collection of data items organized in tables; SQL is the standard language for querying them.
+
+ACID is a set of properties of relational database [transactions](https://en.wikipedia.org/wiki/Database_transaction) .
+
+- Atomicity - Each transaction is all or nothing
+- Consistency - Any transaction will bring the database from one valid state to another
+- Isolation - Executing transactions concurrently has the same results as if the transactions were executed serially
+- Durability - Once a transaction has been committed, it will remain so
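Atomicity can be illustrated with an in-memory SQLite transaction: either both statements commit, or the rollback leaves the database unchanged (the accounts schema is a made-up example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # all or nothing: commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 50 "
                     "WHERE name = 'alice'")
        raise RuntimeError("crash before the matching credit")
except RuntimeError:
    pass  # the partial debit was rolled back

balance = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0]
```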
+
+There are many techniques to scale a relational database: master-slave replication, master-master replication, federation, sharding, denormalization, and SQL tuning.
\ No newline at end of file
From 5477da55e4819b090c021b7b5501151fc1f6d97d Mon Sep 17 00:00:00 2001
From: Vu
Date: Sun, 21 Mar 2021 17:12:09 +0700
Subject: [PATCH 04/11] add more resource
---
resources/noat.cards/Application layer.md | 8 +-
resources/noat.cards/Asynchronism.md | 8 +-
resources/noat.cards/Availability patterns.md | 4 +-
.../noat.cards/Availability vs consistency.md | 10 +--
resources/noat.cards/Base 62.md | 4 +-
resources/noat.cards/Cache locations.md | 9 +--
resources/noat.cards/Cache.md | 2 +-
resources/noat.cards/Communication.md | 2 +-
resources/noat.cards/Consistency patterns.md | 2 +-
resources/noat.cards/Database.md | 2 +-
resources/noat.cards/Denormalization.md | 24 ++++++
resources/noat.cards/Document store.md | 23 ++++++
resources/noat.cards/Domain name system.md | 41 ++++++++++
resources/noat.cards/Federation.md | 25 ++++++
resources/noat.cards/Graph database.md | 24 ++++++
.../Hypertext transfer protocol (HTTP).md | 28 +++++++
resources/noat.cards/Key-value store.md | 26 +++++++
...cy numbers every programmer should know.md | 46 +++++++++++
resources/noat.cards/Latency vs throughput.md | 17 ++++
resources/noat.cards/Load balancer.md | 78 +++++++++++++++++++
.../noat.cards/Remote procedure call (RPC).md | 50 ++++++++++++
.../Representational state transfer (REST).md | 35 +++++++++
.../noat.cards/Reverse proxy (web server).md | 48 ++++++++++++
.../Transmission control protocol (TCP).md | 29 +++++++
.../User datagram protocol (UDP).md | 36 +++++++++
25 files changed, 555 insertions(+), 26 deletions(-)
create mode 100644 resources/noat.cards/Denormalization.md
create mode 100644 resources/noat.cards/Document store.md
create mode 100644 resources/noat.cards/Domain name system.md
create mode 100644 resources/noat.cards/Federation.md
create mode 100644 resources/noat.cards/Graph database.md
create mode 100644 resources/noat.cards/Hypertext transfer protocol (HTTP).md
create mode 100644 resources/noat.cards/Key-value store.md
create mode 100644 resources/noat.cards/Latency numbers every programmer should know.md
create mode 100644 resources/noat.cards/Latency vs throughput.md
create mode 100644 resources/noat.cards/Load balancer.md
create mode 100644 resources/noat.cards/Remote procedure call (RPC).md
create mode 100644 resources/noat.cards/Representational state transfer (REST).md
create mode 100644 resources/noat.cards/Reverse proxy (web server).md
create mode 100644 resources/noat.cards/Transmission control protocol (TCP).md
create mode 100644 resources/noat.cards/User datagram protocol (UDP).md
diff --git a/resources/noat.cards/Application layer.md b/resources/noat.cards/Application layer.md
index f24e4f3e..978afcbe 100644
--- a/resources/noat.cards/Application layer.md
+++ b/resources/noat.cards/Application layer.md
@@ -13,19 +13,19 @@ _[Source: Intro to architecting systems for scale](http://lethain.com/introducti
Separating out the web layer from the application layer (also known as platform layer) allows you to scale and configure both layers independently. Adding a new API results in adding application servers without necessarily adding additional web servers.
-The single responsibility principle advocates for small and autonomous services that work together. Small teams with small services can plan more aggressively for rapid growth.
+The single responsibility principle advocates for small and autonomous services that work together. Small teams with small services can plan more aggressively for rapid growth.
-Workers in the application layer also help enable [asynchronism](https://github.com/donnemartin/system-design-primer#asynchronism) .
+Workers in the application layer also help enable [asynchronism](https://github.com/donnemartin/system-design-primer#asynchronism) .
### Microservices
-Related to this discussion are [microservices](https://en.wikipedia.org/wiki/Microservices) , which can be described as a suite of independently deployable, small, modular services. Each service runs a unique process and communicates through a well-definied, lightweight mechanism to serve a business goal. [1](https://smartbear.com/learn/api-design/what-are-microservices)
+Related to this discussion are [microservices](https://en.wikipedia.org/wiki/Microservices), which can be described as a suite of independently deployable, small, modular services. Each service runs a unique process and communicates through a well-defined, lightweight mechanism to serve a business goal. [1](https://smartbear.com/learn/api-design/what-are-microservices)
Pinterest, for example, could have the following microservices: user profile, follower, feed, search, photo upload, etc.
### Service Discovery
-Systems such as [Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) can help services find each other by keeping track of registered names, addresses, ports, etc.
+Systems such as [Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) can help services find each other by keeping track of registered names, addresses, ports, etc.
### Disadvantage(s) : application layer
diff --git a/resources/noat.cards/Asynchronism.md b/resources/noat.cards/Asynchronism.md
index 64b44e47..3de6fba5 100644
--- a/resources/noat.cards/Asynchronism.md
+++ b/resources/noat.cards/Asynchronism.md
@@ -19,9 +19,9 @@ Message queues receive, hold, and deliver messages. If an operation is too slow
The user is not blocked and the job is processed in the background. During this time, the client might optionally do a small amount of processing to make it seem like the task has completed. For example, if posting a tweet, the tweet could be instantly posted to your timeline, but it could take some time before your tweet is actually delivered to all of your followers.
-Redis is useful as a simple message broker but messages can be lost.
+Redis is useful as a simple message broker but messages can be lost.
-RabbitMQ is popular but requires you to adapt to the 'AMQP' protocol and manage your own nodes.
+RabbitMQ is popular but requires you to adapt to the 'AMQP' protocol and manage your own nodes.
Amazon SQS is hosted but can have high latency and has the possibility of messages being delivered twice.
@@ -29,11 +29,11 @@ Amazon SQS, is hosted but can have high latency and has the possibility of messa
Task queues receive tasks and their related data, run them, then deliver their results. They can support scheduling and can be used to run computationally-intensive jobs in the background.
-Celery has support for scheduling and primarily has python support.
+Celery has support for scheduling and primarily has Python support.
### Back pressure
-If queues start to grow significantly, the queue size can become larger than memory, resulting in cache misses, disk reads, and even slower performance. [Back pressure](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html) can help by limiting the queue size, thereby maintaining a high throughput rate and good response times for jobs already in the queue. Once the queue fills up, clients get a server busy or HTTP 503 status code to try again later. Clients can retry the request at a later time, perhaps with [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) .
+If queues start to grow significantly, the queue size can become larger than memory, resulting in cache misses, disk reads, and even slower performance. [Back pressure](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html) can help by limiting the queue size, thereby maintaining a high throughput rate and good response times for jobs already in the queue. Once the queue fills up, clients get a server busy or HTTP 503 status code to try again later. Clients can retry the request at a later time, perhaps with [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) .
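The retry-with-backoff behavior described above can be sketched as follows. This is a minimal illustration, not a production client; the `OverloadedError` exception and `call_with_backoff` helper are hypothetical names standing in for "server returned busy / HTTP 503" and the client's retry loop.

```python
import random
import time

class OverloadedError(Exception):
    """Raised when the server answers 'busy' (e.g. HTTP 503)."""

def call_with_backoff(request, max_retries=5, base_delay=0.1):
    """Retry a request that may fail under load, sleeping with
    exponential backoff plus random jitter between attempts."""
    for attempt in range(max_retries):
        try:
            return request()
        except OverloadedError:
            if attempt == max_retries - 1:
                raise
            # Delays grow 0.1s, 0.2s, 0.4s, ...; jitter avoids
            # synchronized retry storms from many clients.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
```

The jitter term matters in practice: without it, clients that failed at the same moment all retry at the same moment, re-overloading the queue.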
### Disadvantage(s) : asynchronism
diff --git a/resources/noat.cards/Availability patterns.md b/resources/noat.cards/Availability patterns.md
index 29f46323..65814a84 100644
--- a/resources/noat.cards/Availability patterns.md
+++ b/resources/noat.cards/Availability patterns.md
@@ -39,7 +39,7 @@ _[Source: Scalability, availability, stability, patterns](http://www.slideshare.
### Disadvantage(s) : master-slave replication
- Additional logic is needed to promote a slave to a master.
-- See [Disadvantage(s) : replication](https://github.com/donnemartin/system-design-primer#disadvantages-replication) for points related to both master-slave and master-master.
+- See [Disadvantage(s) : replication](https://github.com/donnemartin/system-design-primer#disadvantages-replication) for points related to both master-slave and master-master.
### Master-master replication
@@ -53,7 +53,7 @@ _[Source: Scalability, availability, stability, patterns](http://www.slideshare.
- You'll need a load balancer or you'll need to make changes to your application logic to determine where to write.
- Most master-master systems are either loosely consistent (violating ACID) or have increased write latency due to synchronization.
- Conflict resolution comes more into play as more write nodes are added and as latency increases.
-- See [Disadvantage(s) : replication](https://github.com/donnemartin/system-design-primer#disadvantages-replication) for points related to both master-slave and master-master.
+- See [Disadvantage(s) : replication](https://github.com/donnemartin/system-design-primer#disadvantages-replication) for points related to both master-slave and master-master.
### Disadvantage(s) : replication
diff --git a/resources/noat.cards/Availability vs consistency.md b/resources/noat.cards/Availability vs consistency.md
index bea7397f..2da924ff 100644
--- a/resources/noat.cards/Availability vs consistency.md
+++ b/resources/noat.cards/Availability vs consistency.md
@@ -7,14 +7,14 @@ isdraft = False
### CAP theorem
-[ ](https://camo.githubusercontent.com/13719354da7dcd34cd79ff5f8b6306a67bc18261/687474703a2f2f692e696d6775722e636f6d2f62674c4d4932752e706e67)
+[ ](https://camo.githubusercontent.com/13719354da7dcd34cd79ff5f8b6306a67bc18261/687474703a2f2f692e696d6775722e636f6d2f62674c4d4932752e706e67)
_[Source: CAP theorem revisited](http://robertgreiner.com/2014/08/cap-theorem-revisited) _
In a distributed computer system, you can only support two of the following guarantees:
-- Consistency - Every read receives the most recent write or an error
-- Availability - Every request receives a response, without guarantee that it contains the most recent version of the information
-- Partition Tolerance - The system continues to operate despite arbitrary partitioning due to network failures
+- Consistency - Every read receives the most recent write or an error
+- Availability - Every request receives a response, without guarantee that it contains the most recent version of the information
+- Partition Tolerance - The system continues to operate despite arbitrary partitioning due to network failures
_Networks aren't reliable, so you'll need to support partition tolerance. You'll need to make a software tradeoff between consistency and availability._
@@ -26,7 +26,7 @@ Waiting for a response from the partitioned node might result in a timeout error
Responses return the most recent version of the data, which might not be the latest. Writes might take some time to propagate when the partition is resolved.
-AP is a good choice if the business needs allow for [eventual consistency](https://github.com/donnemartin/system-design-primer#eventual-consistency) or when the system needs to continue working despite external errors.
+AP is a good choice if the business needs allow for [eventual consistency](https://github.com/donnemartin/system-design-primer#eventual-consistency) or when the system needs to continue working despite external errors.
### Source(s) and further reading
diff --git a/resources/noat.cards/Base 62.md b/resources/noat.cards/Base 62.md
index b06d1907..e82cca21 100644
--- a/resources/noat.cards/Base 62.md
+++ b/resources/noat.cards/Base 62.md
@@ -8,6 +8,6 @@ isdraft = False
---
## Introduction of base 62
-- Encodes to `[a-zA-Z0-9]` which works well for urls, eliminating the need for escaping special characters
+- Encodes to `[a-zA-Z0-9]`, which works well for URLs, eliminating the need for escaping special characters
- Only one hash result for the original input and the operation is deterministic (no randomness involved)
-- Base 64 is another popular encoding but provides issues for urls because of the additional `+` and `/` characters
\ No newline at end of file
+- Base 64 is another popular encoding but poses issues for URLs because of the additional `+` and `/` characters
\ No newline at end of file
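The encoding described in the card above can be sketched in a few lines. This is a minimal illustration; the function names are arbitrary, and the alphabet ordering (digits, then lowercase, then uppercase) is one common convention among several.

```python
# Base 62 alphabet: [0-9a-zA-Z], safe to embed in URLs without escaping.
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def base62_encode(num):
    """Deterministically encode a non-negative integer as a base-62 string."""
    if num == 0:
        return ALPHABET[0]
    digits = []
    while num:
        num, rem = divmod(num, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

def base62_decode(s):
    """Invert base62_encode back to the original integer."""
    num = 0
    for ch in s:
        num = num * 62 + ALPHABET.index(ch)
    return num
```

Because the mapping is deterministic and reversible, the same database ID always yields the same short URL, and the short URL decodes back to the ID without a lookup table.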
diff --git a/resources/noat.cards/Cache locations.md b/resources/noat.cards/Cache locations.md
index c2da1ef2..7f225332 100644
--- a/resources/noat.cards/Cache locations.md
+++ b/resources/noat.cards/Cache locations.md
@@ -6,10 +6,9 @@ isdraft = False
# Cache locations
-
### Client caching
-Caches can be located on the client side (OS or browser) , [server side](https://github.com/donnemartin/system-design-primer#reverse-proxy) , or in a distinct cache layer.
+Caches can be located on the client side (OS or browser) , [server side](https://github.com/donnemartin/system-design-primer#reverse-proxy) , or in a distinct cache layer.
### CDN caching
@@ -17,7 +16,7 @@ Caches can be located on the client side (OS or browser) , [server side](https:
### Web server caching
-[Reverse proxies](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server) and caches such as [Varnish](https://www.varnish-cache.org/) can serve static and dynamic content directly. Web servers can also cache requests, returning responses without having to contact application servers.
+[Reverse proxies](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server) and caches such as [Varnish](https://www.varnish-cache.org/) can serve static and dynamic content directly. Web servers can also cache requests, returning responses without having to contact application servers.
### Database caching
@@ -25,14 +24,14 @@ Your database usually includes some level of caching in a default configuration,
### Application caching
-In-memory caches such as Memcached and Redis are key-value stores between your application and your data storage. Since the data is held in RAM, it is much faster than typical databases where data is stored on disk. RAM is more limited than disk, so [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) algorithms such as [least recently used (LRU) ](https://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used) can help invalidate 'cold' entries and keep 'hot' data in RAM.
+In-memory caches such as Memcached and Redis are key-value stores between your application and your data storage. Since the data is held in RAM, it is much faster than typical databases where data is stored on disk. RAM is more limited than disk, so [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) algorithms such as [least recently used (LRU) ](https://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used) can help invalidate 'cold' entries and keep 'hot' data in RAM.
Redis has the following additional features:
- Persistence option
- Built-in data structures such as sorted sets and lists
-There are multiple levels you can cache that fall into two general categories: database queries and objects:
+There are multiple levels you can cache that fall into two general categories: database queries and objects:
- Row level
- Query-level
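The LRU eviction policy mentioned under application caching can be sketched with an ordered map. This is a toy illustration of the idea, not Memcached's or Redis's actual implementation; the `LRUCache` class and its `capacity` parameter are names chosen for the example.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: once capacity is exceeded, evict the least
    recently used ('cold') entry so 'hot' keys stay resident in RAM."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the coldest entry
```

Every `get` refreshes a key's recency, so entries that are read often survive eviction while one-off entries age out.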
diff --git a/resources/noat.cards/Cache.md b/resources/noat.cards/Cache.md
index c013729b..ae34a915 100644
--- a/resources/noat.cards/Cache.md
+++ b/resources/noat.cards/Cache.md
@@ -16,7 +16,7 @@ Databases often benefit from a uniform distribution of reads and writes across i
### Disadvantage(s) : cache
-- Need to maintain consistency between caches and the source of truth such as the database through [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) .
+- Need to maintain consistency between caches and the source of truth such as the database through [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) .
- Need to make application changes such as adding Redis or memcached.
- Cache invalidation is a difficult problem; there is additional complexity associated with when to update the cache.
diff --git a/resources/noat.cards/Communication.md b/resources/noat.cards/Communication.md
index e8e434a7..cf3bc5b3 100644
--- a/resources/noat.cards/Communication.md
+++ b/resources/noat.cards/Communication.md
@@ -1,5 +1,5 @@
Communication
-------------
---
-[ ](https://camo.githubusercontent.com/1d761d5688d28ce1fb12a0f1c8191bca96eece4c/687474703a2f2f692e696d6775722e636f6d2f354b656f6351732e6a7067)
+[ ](https://camo.githubusercontent.com/1d761d5688d28ce1fb12a0f1c8191bca96eece4c/687474703a2f2f692e696d6775722e636f6d2f354b656f6351732e6a7067)
_[Source: OSI 7 layer model](http://www.escotal.com/osilayer.html) _
\ No newline at end of file
diff --git a/resources/noat.cards/Consistency patterns.md b/resources/noat.cards/Consistency patterns.md
index 0b732532..5d0aa77c 100644
--- a/resources/noat.cards/Consistency patterns.md
+++ b/resources/noat.cards/Consistency patterns.md
@@ -7,7 +7,7 @@ isdraft = False
## Introduction
-With multiple copies of the same data, we are faced with options on how to synchronize them so clients have a consistent view of the data. Recall the definition of consistency from the [CAP theorem](https://github.com/donnemartin/system-design-primer#cap-theorem) - Every read receives the most recent write or an error.
+With multiple copies of the same data, we are faced with options on how to synchronize them so clients have a consistent view of the data. Recall the definition of consistency from the [CAP theorem](https://github.com/donnemartin/system-design-primer#cap-theorem) - Every read receives the most recent write or an error.
### Weak consistency
diff --git a/resources/noat.cards/Database.md b/resources/noat.cards/Database.md
index ac0cd006..ccc5a3bb 100644
--- a/resources/noat.cards/Database.md
+++ b/resources/noat.cards/Database.md
@@ -20,4 +20,4 @@ ACID is a set of properties of relational database [transactions](https://en.wik
- Isolation - Executing transactions concurrently has the same results as if the transactions were executed serially
- Durability - Once a transaction has been committed, it will remain so
-There are many techniques to scale a relational database: master-slave replication, master-master replication, federation, sharding, denormalization, and SQL tuning.
\ No newline at end of file
+There are many techniques to scale a relational database: master-slave replication, master-master replication, federation, sharding, denormalization, and SQL tuning.
\ No newline at end of file
diff --git a/resources/noat.cards/Denormalization.md b/resources/noat.cards/Denormalization.md
new file mode 100644
index 00000000..deff2288
--- /dev/null
+++ b/resources/noat.cards/Denormalization.md
@@ -0,0 +1,24 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Denormalization
+
+## Denormalization introduction
+
+Denormalization attempts to improve read performance at the expense of some write performance. Redundant copies of the data are written in multiple tables to avoid expensive joins. Some RDBMS such as [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL) and Oracle support [materialized views](https://en.wikipedia.org/wiki/Materialized_view) which handle the work of storing redundant information and keeping redundant copies consistent.
+
+Once data becomes distributed with techniques such as [federation](https://github.com/donnemartin/system-design-primer#federation) and [sharding](https://github.com/donnemartin/system-design-primer#sharding) , managing joins across data centers further increases complexity. Denormalization might circumvent the need for such complex joins.
+
+In most systems, reads can heavily outnumber writes, at ratios of 100:1 or even 1000:1. A read resulting in a complex database join can be very expensive, spending a significant amount of time on disk operations.
+
+## Disadvantage(s) : denormalization
+
+- Data is duplicated.
+- Constraints can help redundant copies of information stay in sync, which increases complexity of the database design.
+- A denormalized database under heavy write load might perform worse than its normalized counterpart.
+
+## Source(s) and further reading: denormalization
+
+- [Denormalization](https://en.wikipedia.org/wiki/Denormalization)
\ No newline at end of file
diff --git a/resources/noat.cards/Document store.md b/resources/noat.cards/Document store.md
new file mode 100644
index 00000000..0a6e146b
--- /dev/null
+++ b/resources/noat.cards/Document store.md
@@ -0,0 +1,23 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Document store
+
+## Document Store Abstraction: key-value store with documents stored as values
+
+A document store is centered around documents (XML, JSON, binary, etc) , where a document stores all information for a given object. Document stores provide APIs or a query language to query based on the internal structure of the document itself. _Note, many key-value stores include features for working with a value's metadata, blurring the lines between these two storage types._
+
+Based on the underlying implementation, documents are organized in either collections, tags, metadata, or directories. Although documents can be organized or grouped together, documents may have fields that are completely different from each other.
+
+Some document stores like [MongoDB](https://www.mongodb.com/mongodb-architecture) and [CouchDB](https://blog.couchdb.org/2016/08/01/couchdb-2-0-architecture/) also provide a SQL-like language to perform complex queries. [DynamoDB](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/decandia07dynamo.pdf) supports both key-values and documents.
+
+Document stores provide high flexibility and are often used for working with occasionally changing data.
+
+## Source(s) and further reading: document store
+
+- [Document-oriented database](https://en.wikipedia.org/wiki/Document-oriented_database)
+- [MongoDB architecture](https://www.mongodb.com/mongodb-architecture)
+- [CouchDB architecture](https://blog.couchdb.org/2016/08/01/couchdb-2-0-architecture/)
+- [Elasticsearch architecture](https://www.elastic.co/blog/found-elasticsearch-from-the-bottom-up)
\ No newline at end of file
diff --git a/resources/noat.cards/Domain name system.md b/resources/noat.cards/Domain name system.md
new file mode 100644
index 00000000..f7e9ff6a
--- /dev/null
+++ b/resources/noat.cards/Domain name system.md
@@ -0,0 +1,41 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Domain name system
+
+## Introduction Domain Name System
+
+
+_[Source: DNS security presentation](http://www.slideshare.net/srikrupa5/dns-security-presentation-issa) _
+
+A Domain Name System (DNS) translates a domain name such as [www.example.com](http://www.example.com/) to an IP address.
+
+DNS is hierarchical, with a few authoritative servers at the top level. Your router or ISP provides information about which DNS server(s) to contact when doing a lookup. Lower level DNS servers cache mappings, which could become stale due to DNS propagation delays. DNS results can also be cached by your browser or OS for a certain period of time, determined by the [time to live (TTL) ](https://en.wikipedia.org/wiki/Time_to_live) .
+
+- NS record (name server) - Specifies the DNS servers for your domain/subdomain.
+- MX record (mail exchange) - Specifies the mail servers for accepting messages.
+- A record (address) - Points a name to an IP address.
+- CNAME (canonical) - Points a name to another name or `CNAME` (example.com to [www.example.com](http://www.example.com/)) or to an `A` record.
+
+Services such as [CloudFlare](https://www.cloudflare.com/dns/) and [Route 53](https://aws.amazon.com/route53/) provide managed DNS services. Some DNS services can route traffic through various methods:
+
+- [Weighted round robin](http://g33kinfo.com/info/archives/2657)
+ - Prevent traffic from going to servers under maintenance
+ - Balance between varying cluster sizes
+ - A/B testing
+- Latency-based
+- Geolocation-based
+
+### Disadvantage(s) : DNS
+
+- Accessing a DNS server introduces a slight delay, although mitigated by caching described above.
+- DNS server management could be complex, although they are generally managed by [governments, ISPs, and large companies](http://superuser.com/questions/472695/who-controls-the-dns-servers/472729) .
+- DNS services have recently come under DDoS attack, preventing users from accessing websites such as Twitter without knowing Twitter's IP address(es) .
+
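A lookup like the one described above can be exercised from application code through the operating system's resolver, which consults the cached results mentioned earlier before querying upstream DNS servers. A minimal sketch, assuming IPv4 and using only the standard library:

```python
import socket

def resolve(hostname):
    """Resolve a hostname to its IPv4 addresses via the OS resolver.
    The resolver may answer from the OS/browser-level cache (subject
    to the record's TTL) before contacting a DNS server."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # Each entry's sockaddr is (ip, port); collect the unique IPs.
    return sorted({info[4][0] for info in infos})
```

A name backed by multiple `A` records returns several IPs here, which is the basis of simple DNS-level round-robin load distribution.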
+### Source(s) and further reading
+
+- [DNS architecture](https://technet.microsoft.com/en-us/library/dd197427(v=ws.10).aspx)
+- [Wikipedia](https://en.wikipedia.org/wiki/Domain_Name_System)
+- [DNS articles](https://support.dnsimple.com/categories/dns/)
\ No newline at end of file
diff --git a/resources/noat.cards/Federation.md b/resources/noat.cards/Federation.md
new file mode 100644
index 00000000..9108a008
--- /dev/null
+++ b/resources/noat.cards/Federation.md
@@ -0,0 +1,25 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Federation
+
+## Introduction about Federation
+
+
+
+_[Source: Scaling up to your first 10 million users](https://www.youtube.com/watch?v=vg5onp8TU6Q)_
+
+Federation (or functional partitioning) splits up databases by function. For example, instead of a single, monolithic database, you could have three databases: forums, users, and products, resulting in less read and write traffic to each database and therefore less replication lag. Smaller databases result in more data that can fit in memory, which in turn results in more cache hits due to improved cache locality. With no single central master serializing writes, you can write in parallel, increasing throughput.
+
+## Disadvantage(s) : federation
+
+- Federation is not effective if your schema requires huge functions or tables.
+- You'll need to update your application logic to determine which database to read and write.
+- Joining data from two databases is more complex with a [server link](http://stackoverflow.com/questions/5145637/querying-data-by-joining-two-tables-in-two-database-on-different-servers) .
+- Federation adds more hardware and additional complexity.
+
+## Source(s) and further reading: federation
+
+- [Scaling up to your first 10 million users](https://www.youtube.com/watch?v=vg5onp8TU6Q)
\ No newline at end of file
diff --git a/resources/noat.cards/Graph database.md b/resources/noat.cards/Graph database.md
new file mode 100644
index 00000000..9067acdb
--- /dev/null
+++ b/resources/noat.cards/Graph database.md
@@ -0,0 +1,24 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Graph database
+
+## Abstraction: graph
+
+
+
+
+_[Source: Graph database](https://en.wikipedia.org/wiki/File:GraphDatabase_PropertyGraph.png)_
+
+
+In a graph database, each node is a record and each arc is a relationship between two nodes. Graph databases are optimized to represent complex relationships with many foreign keys or many-to-many relationships.
+
+Graph databases offer high performance for data models with complex relationships, such as a social network. They are relatively new and are not yet widely used; it might be more difficult to find development tools and resources. Many graphs can only be accessed with [REST APIs](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest) .
+
+## Source(s) and further reading: graph
+
+- [Graph database](https://en.wikipedia.org/wiki/Graph_database)
+- [Neo4j](https://neo4j.com/)
+- [FlockDB](https://blog.twitter.com/2010/introducing-flockdb)
\ No newline at end of file
diff --git a/resources/noat.cards/Hypertext transfer protocol (HTTP).md b/resources/noat.cards/Hypertext transfer protocol (HTTP).md
new file mode 100644
index 00000000..cb81c052
--- /dev/null
+++ b/resources/noat.cards/Hypertext transfer protocol (HTTP).md
@@ -0,0 +1,28 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Hypertext transfer protocol (HTTP)
+
+## Introduction about HTTP
+
+HTTP is a method for encoding and transporting data between a client and a server. It is a request/response protocol:
+clients issue requests and servers issue responses with relevant content and completion status info about the request.
+HTTP is self-contained, allowing requests and responses to flow through many intermediate routers and servers that
+perform load balancing, caching, encryption, and compression.
+
+A basic HTTP request consists of a verb (method) and a resource (endpoint) . Below are common HTTP verbs:
+
+| Verb | Description | Idempotent* | Safe | Cacheable |
+|---|---|---|---|---|
+| GET | Reads a resource | Yes | Yes | Yes |
+| POST | Creates a resource or triggers a process that handles data | No | No | Yes if response contains freshness info |
+| PUT | Creates or replaces a resource | Yes | No | No |
+| PATCH | Partially updates a resource | No | No | Yes if response contains freshness info |
+| DELETE | Deletes a resource | Yes | No | No |
+
+HTTP is an application layer protocol relying on lower-level protocols such as TCP and UDP.
+
+- [HTTP](https://www.nginx.com/resources/glossary/http/)
+- [What is the difference between HTTP protocol and TCP protocol](https://www.quora.com/What-is-the-difference-between-HTTP-protocol-and-TCP-protocol)
\ No newline at end of file
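The idempotency column in the verb table can be made concrete with a toy in-memory "server". This is an illustration of the semantics only, not a real HTTP implementation; the `put`/`post` helpers and the `resources` dict are invented for the example.

```python
# Toy resource store illustrating idempotent vs non-idempotent verbs.
resources = {}
_next_id = [1]  # mutable counter for server-assigned IDs

def put(resource_id, body):
    """PUT creates or replaces the resource at a client-chosen ID.
    Repeating the same PUT leaves the same state: idempotent."""
    resources[resource_id] = body
    return resource_id

def post(body):
    """POST creates a new resource with a server-assigned ID.
    Repeating the same POST creates another resource: not idempotent."""
    resource_id = _next_id[0]
    _next_id[0] += 1
    resources[resource_id] = body
    return resource_id
```

This is why clients may safely retry a timed-out PUT or DELETE, but retrying a POST risks creating duplicates unless the server deduplicates requests.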
diff --git a/resources/noat.cards/Key-value store.md b/resources/noat.cards/Key-value store.md
new file mode 100644
index 00000000..1cab5edb
--- /dev/null
+++ b/resources/noat.cards/Key-value store.md
@@ -0,0 +1,26 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Key-value store
+
+## Abstraction: hash table
+
+A key-value store generally allows for O(1) reads and writes and is often backed by memory or SSD. Data stores can
+maintain keys in [lexicographic order](https://en.wikipedia.org/wiki/Lexicographical_order) , allowing efficient
+retrieval of key ranges. Key-value stores can allow for storing of metadata with a value.
+
+Key-value stores provide high performance and are often used for simple data models or for rapidly-changing data, such
+as an in-memory cache layer. Since they offer only a limited set of operations, complexity is shifted to the application
+layer if additional operations are needed.
+
+A key-value store is the basis for more complex systems such as a document store, and in some cases, a graph
+database.
+
+## Source(s) and further reading: key-value store
+
+- [Key-value database](https://en.wikipedia.org/wiki/Key-value_database)
+- [Disadvantages of key-value stores](http://stackoverflow.com/questions/4056093/what-are-the-disadvantages-of-using-a-key-value-table-over-nullable-columns-or)
+- [Redis architecture](http://qnimate.com/overview-of-redis-architecture/)
+- [Memcached architecture](https://www.adayinthelifeof.nl/2011/02/06/memcache-internals/)
\ No newline at end of file
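The lexicographic-ordering property described above can be sketched with a sorted key list. This is a toy model of the idea, not how Redis or Memcached store data; the class and method names are invented for the example.

```python
import bisect

class SortedKVStore:
    """Toy key-value store that keeps keys in lexicographic order,
    so contiguous key ranges (e.g. a shared prefix) scan efficiently."""
    def __init__(self):
        self._keys = []    # sorted list of keys
        self._values = {}

    def put(self, key, value):
        if key not in self._values:
            bisect.insort(self._keys, key)
        self._values[key] = value

    def get(self, key):
        return self._values.get(key)

    def range(self, start, end):
        """Return (key, value) pairs with start <= key < end."""
        lo = bisect.bisect_left(self._keys, start)
        hi = bisect.bisect_left(self._keys, end)
        return [(k, self._values[k]) for k in self._keys[lo:hi]]
```

Note the limited API surface: `put`, `get`, and a range scan. Anything richer (secondary indexes, joins) is pushed up to the application layer, as the card describes.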
diff --git a/resources/noat.cards/Latency numbers every programmer should know.md b/resources/noat.cards/Latency numbers every programmer should know.md
new file mode 100644
index 00000000..6d95ce6e
--- /dev/null
+++ b/resources/noat.cards/Latency numbers every programmer should know.md
@@ -0,0 +1,46 @@
+### Latency numbers every programmer should know
+---
+ Latency Comparison Numbers
+ --------------------------
+ L1 cache reference 0.5 ns
+ Branch mispredict 5 ns
+ L2 cache reference 7 ns 14x L1 cache
+ Mutex lock/unlock 100 ns
+ Main memory reference 100 ns 20x L2 cache, 200x L1 cache
+ Compress 1K bytes with Zippy 10,000 ns 10 us
+ Send 1 KB bytes over 1 Gbps network 10,000 ns 10 us
+ Read 4 KB randomly from SSD            150,000 ns  150 us          ~1GB/sec SSD
+ Read 1 MB sequentially from memory 250,000 ns 250 us
+ Round trip within same datacenter 500,000 ns 500 us
+ Read 1 MB sequentially from SSD      1,000,000 ns  1,000 us  1 ms  ~1GB/sec SSD, 4X memory
+ Disk seek 10,000,000 ns 10,000 us 10 ms 20x datacenter roundtrip
+ Read 1 MB sequentially from 1 Gbps 10,000,000 ns 10,000 us 10 ms 40x memory, 10X SSD
+ Read 1 MB sequentially from disk 30,000,000 ns 30,000 us 30 ms 120x memory, 30X SSD
+ Send packet CA->Netherlands->CA 150,000,000 ns 150,000 us 150 ms
+
+ Notes
+ -----
+ 1 ns = 10^-9 seconds
+ 1 us = 10^-6 seconds = 1,000 ns
+ 1 ms = 10^-3 seconds = 1,000 us = 1,000,000 ns
+
+
+Handy metrics based on numbers above:
+
+- Read sequentially from disk at 30 MB/s
+- Read sequentially from 1 Gbps Ethernet at 100 MB/s
+- Read sequentially from SSD at 1 GB/s
+- Read sequentially from main memory at 4 GB/s
+- 6-7 world-wide round trips per second
+- 2,000 round trips per second within a data center
+
+#### [](https://github.com/donnemartin/system-design-primer#latency-numbers-visualized) Latency numbers visualized
+
+[ ](https://camo.githubusercontent.com/77f72259e1eb58596b564d1ad823af1853bc60a3/687474703a2f2f692e696d6775722e636f6d2f6b307431652e706e67)
+
+#### [](https://github.com/donnemartin/system-design-primer#sources-and-further-reading-14) Source(s) and further reading
+
+- [Latency numbers every programmer should know - 1](https://gist.github.com/jboner/2841832)
+- [Latency numbers every programmer should know - 2](https://gist.github.com/hellerbarde/2843375)
+- [Designs, lessons, and advice from building large distributed systems](http://www.cs.cornell.edu/projects/ladis2009/talks/dean-keynote-ladis2009.pdf)
+- [Software Engineering Advice from Building Large-Scale Distributed Systems](https://static.googleusercontent.com/media/research.google.com/en//people/jeff/stanford-295-talk.pdf)
\ No newline at end of file
diff --git a/resources/noat.cards/Latency vs throughput.md b/resources/noat.cards/Latency vs throughput.md
new file mode 100644
index 00000000..4fd01633
--- /dev/null
+++ b/resources/noat.cards/Latency vs throughput.md
@@ -0,0 +1,17 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Latency vs throughput
+
+## Latency vs throughput define
+Latency is the time to perform some action or to produce some result.
+
+Throughput is the number of such actions or results per unit of time.
+
+Generally, you should aim for maximal throughput with acceptable latency.
+
+## Source(s) and further reading
+
+- [Understanding latency vs throughput](https://community.cadence.com/cadence_blogs_8/b/sd/archive/2010/09/13/understanding-latency-vs-throughput)
\ No newline at end of file
diff --git a/resources/noat.cards/Load balancer.md b/resources/noat.cards/Load balancer.md
new file mode 100644
index 00000000..b736ea74
--- /dev/null
+++ b/resources/noat.cards/Load balancer.md
@@ -0,0 +1,78 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Load balancer
+
+## Load Balancer Introduction
+
+
+
+_[Source: Scalable system design patterns](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html)_
+
+Load balancers distribute incoming client requests to computing resources such as application servers and databases. In each case, the load balancer returns the response from the computing resource to the appropriate client. Load balancers are effective at:
+
+- Preventing requests from going to unhealthy servers
+- Preventing overloading resources
+- Helping eliminate single points of failure
+
+Load balancers can be implemented with hardware (expensive) or with software such as HAProxy.
+
+## Load Balancer benefits
+
+Additional benefits include:
+
+- SSL termination: Decrypt incoming requests and encrypt server responses so backend servers do not have to perform these potentially expensive operations
+ - Removes the need to install [X.509 certificates](https://en.wikipedia.org/wiki/X.509) on each server
+- Session persistence: Issue cookies and route a specific client's requests to same instance if the web apps do not keep track of sessions
+
+To protect against failures, it's common to set up multiple load balancers, either in [active-passive](https://github.com/donnemartin/system-design-primer#active-passive) or [active-active](https://github.com/donnemartin/system-design-primer#active-active) mode.
+
+## How load balancers route traffic
+
+Load balancers can route traffic based on various metrics, including:
+
+- Random
+- Least loaded
+- Session/cookies
+- [Round robin or weighted round robin](http://g33kinfo.com/info/archives/2657)
+- [Layer 4](https://github.com/donnemartin/system-design-primer#layer-4-load-balancing)
+- [Layer 7](https://github.com/donnemartin/system-design-primer#layer-7-load-balancing)
+
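As an illustrative sketch (not part of the primer itself), the simpler strategies above can be mimicked in a few lines of Python; the server names and connection counts are made up:

```python
import itertools
import random

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Random: pick any server uniformly.
def pick_random(pool):
    return random.choice(pool)

# Round robin: cycle through the servers in order.
_rr = itertools.cycle(servers)
def pick_round_robin():
    return next(_rr)

# Least loaded: pick the server with the fewest active connections.
def pick_least_loaded(connections):
    # connections maps server -> number of active connections
    return min(connections, key=connections.get)

assert pick_least_loaded({"10.0.0.1": 5, "10.0.0.2": 2, "10.0.0.3": 9}) == "10.0.0.2"
```

Real load balancers such as HAProxy and NGINX implement these algorithms (and weighted variants) natively; the sketch only shows the selection logic.
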
+## Layer 4 load balancing
+
+Layer 4 load balancers look at info at the [transport layer](https://github.com/donnemartin/system-design-primer#communication) to decide how to distribute requests. Generally, this involves the source and destination IP addresses and ports in the header, but not the contents of the packet. Layer 4 load balancers forward network packets to and from the upstream server, performing [Network Address Translation (NAT)](https://www.nginx.com/resources/glossary/layer-4-load-balancing/).
+
+## Layer 7 load balancing
+
+Layer 7 load balancers look at the [application layer](https://github.com/donnemartin/system-design-primer#communication) to decide how to distribute requests. This can involve contents of the header, message, and cookies. Layer 7 load balancers terminate network traffic, read the message, make a load-balancing decision, then open a connection to the selected server. For example, a layer 7 load balancer can direct video traffic to servers that host videos while directing more sensitive user billing traffic to security-hardened servers.
+
+At the cost of flexibility, layer 4 load balancing requires less time and computing resources than Layer 7, although the performance impact can be minimal on modern commodity hardware.
+
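The video-vs-billing example above can be sketched as a content-based routing decision; the pool names and path prefixes are illustrative only:

```python
# Hypothetical layer 7 routing sketch: inspect the request path (application
# layer data) and choose a backend pool accordingly.
VIDEO_POOL = ["video-1", "video-2"]
BILLING_POOL = ["billing-secure-1"]
DEFAULT_POOL = ["web-1", "web-2"]

def route_layer7(path):
    if path.startswith("/videos/"):
        return VIDEO_POOL           # high-bandwidth media servers
    if path.startswith("/billing/"):
        return BILLING_POOL         # security-hardened servers
    return DEFAULT_POOL

assert route_layer7("/videos/cats.mp4") == VIDEO_POOL
```

A layer 4 balancer could not make this decision, since it never inspects the request path.
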
+## Horizontal scaling
+
+Load balancers can also help with horizontal scaling, improving performance and availability. Scaling out using commodity machines is more cost-efficient and results in higher availability than scaling up a single server on more expensive hardware, known as vertical scaling. It is also easier to hire talent to work on commodity hardware than on specialized enterprise systems.
+
+## Disadvantage(s) : horizontal scaling
+
+- Scaling horizontally introduces complexity and involves cloning servers
+ * Servers should be stateless: they should not contain any user-related data like sessions or profile pictures
+ * Sessions can be stored in a centralized data store such as a [database](https://github.com/donnemartin/system-design-primer#database) (SQL, NoSQL) or a persistent [cache](https://github.com/donnemartin/system-design-primer#cache) (Redis, Memcached)
+- Downstream servers such as caches and databases need to handle more simultaneous connections as upstream servers scale out
+
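The stateless-server pattern above can be sketched as follows; a plain dict stands in for a centralized store such as Redis or Memcached, and all names are illustrative:

```python
import uuid

# Sessions live in a shared store, so any stateless web server can
# handle any request.
session_store = {}  # stands in for Redis/Memcached

def create_session(user_id):
    session_id = str(uuid.uuid4())
    session_store[session_id] = {"user_id": user_id}
    return session_id

def handle_request(session_id):
    # Any server behind the load balancer can look up the session by id.
    session = session_store.get(session_id)
    return session["user_id"] if session else None

sid = create_session(user_id=42)
assert handle_request(sid) == 42
```

Because no server holds user state locally, the load balancer is free to route each request to any instance.
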
+## Disadvantage(s) : load balancer
+
+- The load balancer can become a performance bottleneck if it does not have enough resources or if it is not configured properly.
+- Introducing a load balancer to help eliminate single points of failure results in increased complexity.
+- A single load balancer is a single point of failure; configuring multiple load balancers further increases complexity.
+
+## Source(s) and further reading
+
+- [NGINX architecture](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
+- [HAProxy architecture guide](http://www.haproxy.org/download/1.2/doc/architecture.txt)
+- [Scalability](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
+- [Wikipedia](https://en.wikipedia.org/wiki/Load_balancing_(computing))
+- [Layer 4 load balancing](https://www.nginx.com/resources/glossary/layer-4-load-balancing/)
+- [Layer 7 load balancing](https://www.nginx.com/resources/glossary/layer-7-load-balancing/)
+- [ELB listener config](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html)
\ No newline at end of file
diff --git a/resources/noat.cards/Remote procedure call (RPC).md b/resources/noat.cards/Remote procedure call (RPC).md
new file mode 100644
index 00000000..dda894b6
--- /dev/null
+++ b/resources/noat.cards/Remote procedure call (RPC).md
@@ -0,0 +1,50 @@
+# Remote procedure call (RPC)
+
+## Remote procedure call introduction
+
+
+
+_[Source: Crack the system design interview](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)_
+
+In an RPC, a client causes a procedure to execute on a different address space, usually a remote server. The procedure is coded as if it were a local procedure call, abstracting away the details of how to communicate with the server from the client program. Remote calls are usually slower and less reliable than local calls so it is helpful to distinguish RPC calls from local calls. Popular RPC frameworks include [Protobuf](https://developers.google.com/protocol-buffers/) , [Thrift](https://thrift.apache.org/) , and [Avro](https://avro.apache.org/docs/current/) .
+
+## Remote procedure call in detail
+RPC is a request-response protocol:
+
+- Client program - Calls the client stub procedure. The parameters are pushed onto the stack like a local procedure call.
+- Client stub procedure - Marshals (packs) procedure id and arguments into a request message.
+- Client communication module - OS sends the message from the client to the server.
+- Server communication module - OS passes the incoming packets to the server stub procedure.
+- Server stub procedure - Unmarshals (unpacks) the procedure id and arguments from the request, calls the server procedure matching the procedure id, and passes the given arguments.
+- The server response repeats the steps above in reverse order.
+
+Sample RPC calls:
+```
+ GET /someoperation?data=anId
+
+ POST /anotheroperation
+ {
+ "data": "anId",
+ "anotherdata": "another value"
+ }
+```
+
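The marshal/unmarshal steps above can be sketched as a toy RPC round trip; this is illustrative only, not how Thrift or Protobuf actually encode messages:

```python
import json

def add(a, b):
    return a + b

procedures = {"add": add}  # procedure id -> server procedure

def client_stub(proc_id, *args):
    # Marshal the procedure id and arguments into a request message.
    request = json.dumps({"proc": proc_id, "args": args})
    return server_stub(request)      # stands in for the network hop

def server_stub(request):
    # Unmarshal, dispatch to the matching procedure, marshal the result.
    msg = json.loads(request)
    result = procedures[msg["proc"]](*msg["args"])
    return json.dumps({"result": result})

assert json.loads(client_stub("add", 1, 2))["result"] == 3
```

Real frameworks generate the stubs from an interface definition and use compact binary encodings instead of JSON.
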
+## Remote procedure call under behavior view
+
+RPC is focused on exposing behaviors. RPCs are often used for performance reasons with internal communications, as you can hand-craft native calls to better fit your use cases.
+
+Choose a Native Library aka SDK when:
+
+- You know your target platform.
+- You want to control how your "logic" is accessed
+- You want to control how error handling happens in your library
+- Performance and end-user experience are your primary concerns
+
+HTTP APIs following REST tend to be used more often for public APIs.
+
+## Disadvantage(s) : RPC
+
+- RPC clients become tightly coupled to the service implementation.
+- A new API must be defined for every new operation or use case.
+- It can be difficult to debug RPC.
+- You might not be able to leverage existing technologies out of the box. For example, it might require additional effort to ensure [RPC calls are properly cached](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/) on caching servers such as [Squid](http://www.squid-cache.org/) .
\ No newline at end of file
diff --git a/resources/noat.cards/Representational state transfer (REST).md b/resources/noat.cards/Representational state transfer (REST).md
new file mode 100644
index 00000000..50a4f61f
--- /dev/null
+++ b/resources/noat.cards/Representational state transfer (REST).md
@@ -0,0 +1,35 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Representational state transfer (REST)
+
+## Representational state transfer introduction
+
+REST is an architectural style enforcing a client/server model where the client acts on a set of resources managed by the server. The server provides a representation of resources and actions that can either manipulate or get a new representation of resources. All communication must be stateless and cacheable.
+
+## RESTful interface
+
+There are four qualities of a RESTful interface:
+
+- Identify resources (URI in HTTP) - use the same URI regardless of any operation.
+- Change with representations (Verbs in HTTP) - use verbs, headers, and body.
+- Self-descriptive error message (status response in HTTP) - Use status codes, don't reinvent the wheel.
+- [HATEOAS](http://restcookbook.com/Basics/hateoas/) (HTML interface for HTTP) - your web service should be fully accessible in a browser.
+
+Sample REST calls:
+
+```
+ GET /someresources/anId
+
+ PUT /someresources/anId
+ {"anotherdata": "another value"}
+```
+
+REST is focused on exposing data. It minimizes the coupling between client/server and is often used for public HTTP APIs. REST uses a more generic and uniform method of exposing resources through URIs, [representation through headers](https://github.com/for-GET/know-your-http-well/blob/master/headers.md) , and actions through verbs such as GET, POST, PUT, DELETE, and PATCH. Being stateless, REST is great for horizontal scaling and partitioning.
+
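A minimal sketch of the uniform interface: the same URI identifies the resource while the verb determines the action. The handler and status codes below are illustrative, not a real web framework:

```python
resources = {}  # uri -> current representation

def handle(verb, uri, body=None):
    if verb == "GET":
        return (200, resources[uri]) if uri in resources else (404, None)
    if verb == "PUT":
        resources[uri] = body        # create or replace the representation
        return (200, body)
    if verb == "DELETE":
        return (204, resources.pop(uri, None))
    return (405, None)               # self-descriptive error: method not allowed

handle("PUT", "/someresources/anId", {"anotherdata": "another value"})
status, body = handle("GET", "/someresources/anId")
assert status == 200 and body["anotherdata"] == "another value"
```

Note that every operation uses the same URI; only the verb changes, and errors are reported through standard status codes.
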
+## Disadvantage(s) : REST
+
+- With REST being focused on exposing data, it might not be a good fit if resources are not naturally organized or accessed in a simple hierarchy. For example, returning all updated records from the past hour matching a particular set of events is not easily expressed as a path. With REST, it is likely to be implemented with a combination of URI path, query parameters, and possibly the request body.
+- REST typically relies on a few verbs (GET, POST, PUT, DELETE, and PATCH) which sometimes doesn't fit your use case. For example, moving expired documents to the archive folder might not cleanly fit within these verbs.
\ No newline at end of file
diff --git a/resources/noat.cards/Reverse proxy (web server).md b/resources/noat.cards/Reverse proxy (web server).md
new file mode 100644
index 00000000..e0cf4479
--- /dev/null
+++ b/resources/noat.cards/Reverse proxy (web server).md
@@ -0,0 +1,48 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Reverse proxy (web server)
+
+## Reverse proxy (web server) introduction
+
+
+
+_[Source: Wikipedia](https://commons.wikimedia.org/wiki/File:Proxy_concept_en.svg) _
+
+A reverse proxy is a web server that centralizes internal services and provides unified interfaces to the public. Requests from clients are forwarded to a server that can fulfill them before the reverse proxy returns the server's response to the client.
+
+## Reverse proxy benefits
+
+Additional benefits include:
+
+- Increased security - Hide information about backend servers, blacklist IPs, limit number of connections per client
+- Increased scalability and flexibility - Clients only see the reverse proxy's IP, allowing you to scale servers or change their configuration
+- SSL termination - Decrypt incoming requests and encrypt server responses so backend servers do not have to perform these potentially expensive operations
+ - Removes the need to install [X.509 certificates](https://en.wikipedia.org/wiki/X.509) on each server
+- Compression - Compress server responses
+- Caching - Return the response for cached requests
+- Static content - Serve static content directly
+ - HTML/CSS/JS
+ - Photos
+ - Videos
+ - Etc
+
+## Load balancer vs reverse proxy
+
+- Deploying a load balancer is useful when you have multiple servers. Often, load balancers route traffic to a set of servers serving the same function.
+- Reverse proxies can be useful even with just one web server or application server, opening up the benefits described in the previous section.
+- Solutions such as NGINX and HAProxy can support both layer 7 reverse proxying and load balancing.
+
+## Disadvantage(s) : reverse proxy
+
+- Introducing a reverse proxy results in increased complexity.
+- A single reverse proxy is a single point of failure; configuring multiple reverse proxies (i.e. a [failover](https://en.wikipedia.org/wiki/Failover)) further increases complexity.
+
+## Source(s) and further reading
+
+- [Reverse proxy vs load balancer](https://www.nginx.com/resources/glossary/reverse-proxy-vs-load-balancer/)
+- [NGINX architecture](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
+- [HAProxy architecture guide](http://www.haproxy.org/download/1.2/doc/architecture.txt)
+- [Wikipedia](https://en.wikipedia.org/wiki/Reverse_proxy)
\ No newline at end of file
diff --git a/resources/noat.cards/Transmission control protocol (TCP).md b/resources/noat.cards/Transmission control protocol (TCP).md
new file mode 100644
index 00000000..a8c061d3
--- /dev/null
+++ b/resources/noat.cards/Transmission control protocol (TCP).md
@@ -0,0 +1,29 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Transmission control protocol (TCP)
+
+## TCP Introduction
+
+
+
+_[Source: How to make a multiplayer game](http://www.wildbunny.co.uk/blog/2012/10/09/how-to-make-a-multi-player-game-part-1/)_
+
+TCP is a connection-oriented protocol over an [IP network](https://en.wikipedia.org/wiki/Internet_Protocol) . Connection is established and terminated using a [handshake](https://en.wikipedia.org/wiki/Handshaking) . All packets sent are guaranteed to reach the destination in the original order and without corruption through:
+
+- Sequence numbers and [checksum fields](https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Checksum_computation) for each packet
+- [Acknowledgement](https://en.wikipedia.org/wiki/Acknowledgement_(data_networks)) packets and automatic retransmission
+
+If the sender does not receive a correct response, it will resend the packets. If there are multiple timeouts, the connection is dropped. TCP also implements [flow control](https://en.wikipedia.org/wiki/Flow_control_(data)) and [congestion control](https://en.wikipedia.org/wiki/Network_congestion#Congestion_control) . These guarantees cause delays and generally results in less efficient transmission than UDP.
+
+To ensure high throughput, web servers can keep a large number of TCP connections open, resulting in high memory usage. It can be expensive to have a large number of open connections between web server threads and say, a [memcached](https://github.com/donnemartin/system-design-primer#memcached) server. [Connection pooling](https://en.wikipedia.org/wiki/Connection_pool) can help in addition to switching to UDP where applicable.
+
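A minimal sketch of a TCP exchange over loopback using Python's standard `socket` module; the handshake, ordering, and retransmission described above are all handled by the OS:

```python
import socket
import threading

# Server side: bind a TCP (SOCK_STREAM) socket and echo one message.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _ = server.accept()
    conn.sendall(conn.recv(1024))    # echo the bytes back
    conn.close()

threading.Thread(target=serve_once).start()

# Client side: the three-way handshake happens inside create_connection.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
data = client.recv(1024)             # delivered in order and uncorrupted
client.close()
server.close()
assert data == b"hello"
```

Every open connection like this consumes server memory, which is why connection pooling matters at scale.
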
+
+## Use TCP over UDP when:
+
+TCP is useful for applications that require high reliability but are less time critical. Some examples include web servers, database info, SMTP, FTP, and SSH.
+
+- You need all of the data to arrive intact
+- You want to automatically make a best estimate use of the network throughput
\ No newline at end of file
diff --git a/resources/noat.cards/User datagram protocol (UDP).md b/resources/noat.cards/User datagram protocol (UDP).md
new file mode 100644
index 00000000..5dd2787f
--- /dev/null
+++ b/resources/noat.cards/User datagram protocol (UDP).md
@@ -0,0 +1,36 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# User datagram protocol (UDP)
+
+## User datagram protocol (UDP) introduction
+
+
+
+_[Source: How to make a multiplayer game](http://www.wildbunny.co.uk/blog/2012/10/09/how-to-make-a-multi-player-game-part-1/) _
+
+UDP is connectionless. Datagrams (analogous to packets) are guaranteed only at the datagram level. Datagrams might reach their destination out of order or not at all. UDP does not support congestion control. Without the guarantees that TCP supports, UDP is generally more efficient.
+
+UDP can broadcast, sending datagrams to all devices on the subnet. This is useful with [DHCP](https://en.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol) because the client has not yet received an IP address, which TCP would require before it could stream.
+
+
+## Use UDP over TCP when
+
+UDP is less reliable but works well in real time use cases such as VoIP, video chat, streaming, and realtime multiplayer games.
+
+Use UDP over TCP when:
+
+- You need the lowest latency
+- Late data is worse than loss of data
+- You want to implement your own error correction
+
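For contrast with the TCP card, a minimal UDP exchange over loopback: there is no connection or handshake, and each `sendto` ships one independent datagram:

```python
import socket

# Receiver: bind a UDP (SOCK_DGRAM) socket; no listen/accept needed.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
port = receiver.getsockname()[1]

# Sender: fire and forget -- no handshake, no delivery guarantee.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"ping", ("127.0.0.1", port))

# Over a real network this datagram could arrive out of order or not at all;
# loopback delivery is reliable enough for this sketch.
data, addr = receiver.recvfrom(1024)
sender.close()
receiver.close()
assert data == b"ping"
```

The absence of connection setup is exactly what makes UDP attractive for latency-sensitive traffic.
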
+## Source(s) and further reading: TCP and UDP
+
+- [Networking for game programming](http://gafferongames.com/networking-for-game-programmers/udp-vs-tcp/)
+- [Key differences between TCP and UDP protocols](http://www.cyberciti.biz/faq/key-differences-between-tcp-and-udp-protocols/)
+- [Difference between TCP and UDP](http://stackoverflow.com/questions/5970383/difference-between-tcp-and-udp)
+- [Transmission control protocol](https://en.wikipedia.org/wiki/Transmission_Control_Protocol)
+- [User datagram protocol](https://en.wikipedia.org/wiki/User_Datagram_Protocol)
+- [Scaling memcache at Facebook](http://www.cs.bu.edu/~jappavoo/jappavoo.github.com/451/papers/memcache-fb.pdf)
\ No newline at end of file
From 96c49e1ff37a7e43d75ff9c8dc921e4857ea2a40 Mon Sep 17 00:00:00 2001
From: Vu
Date: Sun, 21 Mar 2021 17:45:39 +0700
Subject: [PATCH 05/11] change to heading level two
---
resources/noat.cards/Application layer.md | 8 +--
resources/noat.cards/Asynchronism.md | 15 ++---
resources/noat.cards/Availability patterns.md | 2 +-
resources/noat.cards/MD5.md | 9 +++
resources/noat.cards/NoSQL.md | 28 ++++++++++
.../noat.cards/Performance vs scalability.md | 20 +++++++
resources/noat.cards/Refresh-ahead.md | 15 +++++
resources/noat.cards/SQL or NoSQL.md | 51 +++++++++++++++++
resources/noat.cards/SQL tuning.md | 56 +++++++++++++++++++
resources/noat.cards/Security.md | 16 ++++++
resources/noat.cards/Sharding.md | 32 +++++++++++
resources/noat.cards/Wide column store.md | 27 +++++++++
.../noat.cards/Write-behind (write-back).md | 22 ++++++++
resources/noat.cards/Write-through.md | 39 +++++++++++++
14 files changed, 328 insertions(+), 12 deletions(-)
create mode 100644 resources/noat.cards/MD5.md
create mode 100644 resources/noat.cards/NoSQL.md
create mode 100644 resources/noat.cards/Performance vs scalability.md
create mode 100644 resources/noat.cards/Refresh-ahead.md
create mode 100644 resources/noat.cards/SQL or NoSQL.md
create mode 100644 resources/noat.cards/SQL tuning.md
create mode 100644 resources/noat.cards/Security.md
create mode 100644 resources/noat.cards/Sharding.md
create mode 100644 resources/noat.cards/Wide column store.md
create mode 100644 resources/noat.cards/Write-behind (write-back).md
create mode 100644 resources/noat.cards/Write-through.md
diff --git a/resources/noat.cards/Application layer.md b/resources/noat.cards/Application layer.md
index 978afcbe..ae867b58 100644
--- a/resources/noat.cards/Application layer.md
+++ b/resources/noat.cards/Application layer.md
@@ -5,9 +5,9 @@ isdraft = False
# Application layer
-### Application layer - Introduction
+## Application layer - Introduction
-[ ](https://camo.githubusercontent.com/feeb549c5b6e94f65c613635f7166dc26e0c7de7/687474703a2f2f692e696d6775722e636f6d2f7942355359776d2e706e67)
+
_[Source: Intro to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale/#platform_layer) _
@@ -17,7 +17,7 @@ The single responsibility principle advocates for small and autonomous services
Workers in the application layer also help enable [asynchronism](https://github.com/donnemartin/system-design-primer#asynchronism) .
-### Microservices
+## Microservices
Related to this discussion are [microservices](https://en.wikipedia.org/wiki/Microservices) , which can be described as a suite of independently deployable, small, modular services. Each service runs a unique process and communicates through a well-definied, lightweight mechanism to serve a business goal. [1](https://smartbear.com/learn/api-design/what-are-microservices)
@@ -32,7 +32,7 @@ Systems such as [Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-t
- Adding an application layer with loosely coupled services requires a different approach from an architectural, operations, and process viewpoint (vs a monolithic system) .
- Microservices can add complexity in terms of deployments and operations.
-### [](https://github.com/donnemartin/system-design-primer#sources-and-further-reading-9) Source(s) and further reading
+### Source(s) and further reading
- [Intro to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale)
- [Crack the system design interview](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)
diff --git a/resources/noat.cards/Asynchronism.md b/resources/noat.cards/Asynchronism.md
index 3de6fba5..77bd02f1 100644
--- a/resources/noat.cards/Asynchronism.md
+++ b/resources/noat.cards/Asynchronism.md
@@ -5,12 +5,13 @@ isdraft = False
# Asynchronism
-[ ](https://camo.githubusercontent.com/c01ec137453216bbc188e3a8f16da39ec9131234/687474703a2f2f692e696d6775722e636f6d2f353447597353782e706e67)
-_[Source: Intro to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale/#platform_layer) _
+
+
+[Source: Intro to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale/#platform_layer)
Asynchronous workflows help reduce request times for expensive operations that would otherwise be performed in-line. They can also help by doing time-consuming work in advance, such as periodic aggregation of data.
-### Message queues
+## Message queues
Message queues receive, hold, and deliver messages. If an operation is too slow to perform inline, you can use a message queue with the following workflow:
@@ -25,21 +26,21 @@ RabbitMQ is popular but requires you to adapt to the 'AMQP' protocol and manage
Amazon SQS, is hosted but can have high latency and has the possibility of messages being delivered twice.
-### Task queues
+## Task queues
Tasks queues receive tasks and their related data, runs them, then delivers their results. They can support scheduling and can be used to run computationally-intensive jobs in the background.
Celery has support for scheduling and primarily has python support.
-### Back pressure
+## Back pressure
If queues start to grow significantly, the queue size can become larger than memory, resulting in cache misses, disk reads, and even slower performance. [Back pressure](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html) can help by limiting the queue size, thereby maintaining a high throughput rate and good response times for jobs already in the queue. Once the queue fills up, clients get a server busy or HTTP 503 status code to try again later. Clients can retry the request at a later time, perhaps with [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) .
-### Disadvantage(s) : asynchronism
+## Disadvantage(s) : asynchronism
- Use cases such as inexpensive calculations and realtime workflows might be better suited for synchronous operations, as introducing queues can add delays and complexity.
-### Source(s) and further reading
+## Source(s) and further reading
- [It's all a numbers game](https://www.youtube.com/watch?v=1KRYH75wgy4)
- [Applying back pressure when overloaded](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html)
diff --git a/resources/noat.cards/Availability patterns.md b/resources/noat.cards/Availability patterns.md
index 65814a84..09f0d3ba 100644
--- a/resources/noat.cards/Availability patterns.md
+++ b/resources/noat.cards/Availability patterns.md
@@ -7,7 +7,7 @@ isdraft = False
There are two main patterns to support high availability:fail-over and replication.
-### Active-passive (Fail-Over)
+## Active-passive (Fail-Over)
With active-passive fail-over, heartbeats are sent between the active and the passive server on standby. If the heartbeat is interrupted, the passive server takes over the active's IP address and resumes service.
diff --git a/resources/noat.cards/MD5.md b/resources/noat.cards/MD5.md
new file mode 100644
index 00000000..ae14d468
--- /dev/null
+++ b/resources/noat.cards/MD5.md
@@ -0,0 +1,9 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# MD5
+
+- Widely used hashing function that produces a 128-bit hash value
+- Uniformly distributed
\ No newline at end of file
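As a quick illustration with Python's standard `hashlib`: the digest is 128 bits, and its uniform distribution is what makes it usable for spreading keys across shards (note MD5 is no longer suitable for security purposes; the key and shard count below are made up):

```python
import hashlib

digest = hashlib.md5(b"user_12345").digest()
assert len(digest) * 8 == 128            # 128-bit hash value

# Illustrative use: map a key to one of N shards via the uniform digest.
num_shards = 4
shard = int.from_bytes(digest, "big") % num_shards
assert 0 <= shard < num_shards
```
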
diff --git a/resources/noat.cards/NoSQL.md b/resources/noat.cards/NoSQL.md
new file mode 100644
index 00000000..00c6c19a
--- /dev/null
+++ b/resources/noat.cards/NoSQL.md
@@ -0,0 +1,28 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# NoSQL
+
+## NoSQL introduction
+
+NoSQL is a collection of data items represented in a key-value store, document-store, wide column store, or a graph database. Data is denormalized, and joins are generally done in the application code. Most NoSQL stores lack true ACID transactions and favor [eventual consistency](https://github.com/donnemartin/system-design-primer#eventual-consistency) .
+
+## NoSQL under BASE principle
+
+BASE is often used to describe the properties of NoSQL databases. In comparison with the [CAP Theorem](https://github.com/donnemartin/system-design-primer#cap-theorem) , BASE chooses availability over consistency.
+
+- Basically available - the system guarantees availability.
+- Soft state - the state of the system may change over time, even without input.
+- Eventual consistency - the system will become consistent over a period of time, given that the system doesn't receive input during that period.
+
+In addition to choosing between [SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql) , it is helpful to understand which type of NoSQL database best fits your use case(s) . We'll review key-value stores, document-stores, wide column stores, and graph databases in the next section.
+
+## Source(s) and further reading: NoSQL
+
+- [Explanation of base terminology](http://stackoverflow.com/questions/3342497/explanation-of-base-terminology)
+- [NoSQL databases a survey and decision guidance](https://medium.com/baqend-blog/nosql-databases-a-survey-and-decision-guidance-ea7823a822d#.wskogqenq)
+- [Scalability](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
+- [Introduction to NoSQL](https://www.youtube.com/watch?v=qI_g07C_Q5I)
+- [NoSQL patterns](http://horicky.blogspot.com/2009/11/nosql-patterns.html)
\ No newline at end of file
diff --git a/resources/noat.cards/Performance vs scalability.md b/resources/noat.cards/Performance vs scalability.md
new file mode 100644
index 00000000..7e9d89bf
--- /dev/null
+++ b/resources/noat.cards/Performance vs scalability.md
@@ -0,0 +1,20 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Performance vs scalability
+
+## Performance vs scalability
+
+A service is scalable if it results in increased performance in a manner proportional to resources added. Generally, increasing performance means serving more units of work, but it can also be to handle larger units of work, such as when datasets grow.[1](http://www.allthingsdistributed.com/2006/03/a_word_on_scalability.html)
+
+Another way to look at performance vs scalability:
+
+- If you have a performance problem, your system is slow for a single user.
+- If you have a scalability problem, your system is fast for a single user but slow under heavy load.
+
+## Source(s) and further reading
+
+- [A word on scalability](http://www.allthingsdistributed.com/2006/03/a_word_on_scalability.html)
+- [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
\ No newline at end of file
diff --git a/resources/noat.cards/Refresh-ahead.md b/resources/noat.cards/Refresh-ahead.md
new file mode 100644
index 00000000..e7febddb
--- /dev/null
+++ b/resources/noat.cards/Refresh-ahead.md
@@ -0,0 +1,15 @@
+# Refresh-ahead
+
+## Introduction
+
+
+
+[Source: From cache to in-memory data grid](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast)
+
+You can configure the cache to automatically refresh any recently accessed cache entry prior to its expiration.
+
+Refresh-ahead can result in reduced latency vs read-through if the cache can accurately predict which items are likely to be needed in the future.
+
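A toy refresh-ahead cache can be sketched as follows; the TTL, refresh window, and backing-store stub are all illustrative assumptions, not how Hazelcast or any particular cache implements it:

```python
import time

TTL = 10.0             # seconds an entry stays fresh
REFRESH_WINDOW = 2.0   # refresh entries accessed within 2s of expiry
cache = {}             # key -> (value, expires_at)

def load_from_store(key):
    return f"value-for-{key}"        # stands in for a slow backing store

def get(key, now=None):
    now = time.monotonic() if now is None else now
    value, expires_at = cache.get(key, (None, 0.0))
    if now >= expires_at:                     # miss or expired: read through
        value = load_from_store(key)
        cache[key] = (value, now + TTL)
    elif expires_at - now < REFRESH_WINDOW:   # refresh ahead of expiration
        cache[key] = (load_from_store(key), now + TTL)
    return value

assert get("a", now=0.0) == "value-for-a"     # cold miss: loads and caches
get("a", now=9.0)                             # within window: refreshed early
assert cache["a"][1] == 19.0                  # expiry pushed out to 9.0 + TTL
```

The second access lands in the refresh window, so the entry is reloaded before it ever expires, hiding the store's latency from later readers.
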
+## Disadvantage(s) : refresh-ahead
+
+- Not accurately predicting which items are likely to be needed in the future can result in worse performance than without refresh-ahead.
diff --git a/resources/noat.cards/SQL or NoSQL.md b/resources/noat.cards/SQL or NoSQL.md
new file mode 100644
index 00000000..dc40d907
--- /dev/null
+++ b/resources/noat.cards/SQL or NoSQL.md
@@ -0,0 +1,51 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# SQL or NoSQL
+
+## Reasons for SQL:
+
+
+
+[Source: Transitioning from RDBMS to NoSQL](https://www.infoq.com/articles/Transition-RDBMS-NoSQL/)
+
+
+- Structured data
+- Strict schema
+- Relational data
+- Need for complex joins
+- Transactions
+- Clear patterns for scaling
+- More established: developers, community, code, tools, etc
+- Lookups by index are very fast
+
+## Reasons for NoSQL:
+
+
+
+[Source: Transitioning from RDBMS to NoSQL](https://www.infoq.com/articles/Transition-RDBMS-NoSQL/)
+
+
+- Semi-structured data
+- Dynamic or flexible schema
+- Non relational data
+- No need for complex joins
+- Store many TB (or PB) of data
+- Very data intensive workload
+- Very high throughput for IOPS
+
+## Sample data well-suited for NoSQL:
+
+
+- Rapid ingest of clickstream and log data
+- Leaderboard or scoring data
+- Temporary data, such as a shopping cart
+- Frequently accessed ('hot') tables
+- Metadata/lookup tables
+
+## Source(s) and further reading: SQL or NoSQL
+
+- [Scaling up to your first 10 million users](https://www.youtube.com/watch?v=vg5onp8TU6Q)
+- [SQL vs NoSQL differences](https://www.sitepoint.com/sql-vs-nosql-differences/)
\ No newline at end of file
diff --git a/resources/noat.cards/SQL tuning.md b/resources/noat.cards/SQL tuning.md
new file mode 100644
index 00000000..bcbb653b
--- /dev/null
+++ b/resources/noat.cards/SQL tuning.md
@@ -0,0 +1,56 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# SQL tuning
+
+## Introduction
+
+SQL tuning is a broad topic and many [books](https://www.amazon.com/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords=sql+tuning) have been written as reference.
+
+It's important to benchmark and profile to simulate and uncover bottlenecks.
+
+- Benchmark - Simulate high-load situations with tools such as [ab](http://httpd.apache.org/docs/2.2/programs/ab.html) .
+- Profile - Enable tools such as the [slow query log](http://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html) to help track performance issues.
+
+Benchmarking and profiling might point you to the following optimizations.
+
+## Tighten up the schema
+
+- MySQL dumps to disk in contiguous blocks for fast access.
+- Use `CHAR` instead of `VARCHAR` for fixed-length fields.
+ - `CHAR` effectively allows for fast, random access, whereas with `VARCHAR`, you must find the end of a string before moving onto the next one.
+- Use `TEXT` for large blocks of text such as blog posts. `TEXT` also allows for boolean searches. Using a `TEXT` field results in storing a pointer on disk that is used to locate the text block.
+- Use `INT` for larger numbers up to 2^32 or 4 billion.
+- Use `DECIMAL` for currency to avoid floating point representation errors.
+- Avoid storing large `BLOBS`, store the location of where to get the object instead.
+- `VARCHAR(255)` is the largest number of characters that can be counted in an 8-bit number, often maximizing the use of a byte in some RDBMS.
+- Set the `NOT NULL` constraint where applicable to [improve search performance](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search) .
+
+## Use good indices
+
+- Columns that you are querying (`SELECT`, `GROUP BY`, `ORDER BY`, `JOIN`) could be faster with indices.
+- Indices are usually represented as a self-balancing [B-tree](https://en.wikipedia.org/wiki/B-tree) that keeps data sorted and allows searches, sequential access, insertions, and deletions in logarithmic time.
+- Placing an index can keep the data in memory, requiring more space.
+- Writes could also be slower since the index also needs to be updated.
+- When loading large amounts of data, it might be faster to disable indices, load the data, then rebuild the indices.
+
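The effect of an index can be checked with Python's built-in `sqlite3` and `EXPLAIN QUERY PLAN` (a small illustrative schema; other RDBMS expose similar `EXPLAIN` output):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
db.executemany("INSERT INTO users (email) VALUES (?)",
               [(f"user{i}@example.com",) for i in range(1000)])

# Without an index, the planner must scan the whole table.
plan_before = db.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = ?",
    ("user42@example.com",)).fetchone()[-1]

db.execute("CREATE INDEX idx_users_email ON users (email)")

# With the index, the same query becomes an index search.
plan_after = db.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = ?",
    ("user42@example.com",)).fetchone()[-1]

assert "INDEX" not in plan_before   # e.g. "SCAN users"
assert "INDEX" in plan_after        # e.g. "SEARCH users USING ... INDEX ..."
```

This also demonstrates the write-side cost: the `CREATE INDEX` step itself had to read and sort every row.
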
+## Avoid expensive joins
+
+- [Denormalize](https://github.com/donnemartin/system-design-primer#denormalization) where performance demands it.
+
+## Partition tables
+
+- Break up a table by putting hot spots in a separate table to help keep it in memory.
+
+## Tune the query cache
+
+- In some cases, the [query cache](http://dev.mysql.com/doc/refman/5.7/en/query-cache) could lead to [performance issues](https://www.percona.com/blog/2014/01/28/10-mysql-performance-tuning-settings-after-installation/) .
+
+## Source(s) and further reading: SQL tuning
+
+- [Tips for optimizing MySQL queries](http://20bits.com/article/10-tips-for-optimizing-mysql-queries-that-dont-suck)
+- [Is there a good reason i see VARCHAR(255) used so often?](http://stackoverflow.com/questions/1217466/is-there-a-good-reason-i-see-varchar255-used-so-often-as-opposed-to-another-l)
+- [How do null values affect performance?](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search)
+- [Slow query log](http://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html)
\ No newline at end of file
diff --git a/resources/noat.cards/Security.md b/resources/noat.cards/Security.md
new file mode 100644
index 00000000..7af49afd
--- /dev/null
+++ b/resources/noat.cards/Security.md
@@ -0,0 +1,16 @@
+Security
+--------
+---
+This section could use some updates. Consider [contributing](https://github.com/donnemartin/system-design-primer#contributing) !
+
+Security is a broad topic. Unless you have considerable experience, a security background, or are applying for a position that requires knowledge of security, you probably won't need to know more than the basics:
+
+- Encrypt in transit and at rest.
+- Sanitize all user inputs or any input parameters exposed to users to prevent [XSS](https://en.wikipedia.org/wiki/Cross-site_scripting) and [SQL injection](https://en.wikipedia.org/wiki/SQL_injection).
+- Use parameterized queries to prevent SQL injection.
+- Use the principle of [least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege) .
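A minimal sketch of the parameterized-query point with `sqlite3` (hypothetical table): the driver treats the input strictly as data, never as SQL, so an injection attempt is stored as a literal string instead of executing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT NOT NULL)")

malicious = "bob'); DROP TABLE users; --"

# Placeholder (?) binding: the value never gets spliced into the SQL text.
conn.execute("INSERT INTO users (name) VALUES (?)", (malicious,))

rows = conn.execute("SELECT name FROM users").fetchall()
assert rows == [(malicious,)]  # stored verbatim, and the table still exists
```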
+
+### Source(s) and further reading
+
+- [Security guide for developers](https://github.com/FallibleInc/security-guide-for-developers)
+- [OWASP top ten](https://www.owasp.org/index.php/OWASP_Top_Ten_Cheat_Sheet)
\ No newline at end of file
diff --git a/resources/noat.cards/Sharding.md b/resources/noat.cards/Sharding.md
new file mode 100644
index 00000000..7405f70a
--- /dev/null
+++ b/resources/noat.cards/Sharding.md
@@ -0,0 +1,32 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Sharding
+
+## Introduction
+
+
+
+[Source: Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
+
+Sharding distributes data across different databases such that each database can only manage a subset of the data. Taking a users database as an example, as the number of users increases, more shards are added to the cluster.
+
+Similar to the advantages of [federation](https://github.com/donnemartin/system-design-primer#federation) , sharding results in less read and write traffic, less replication, and more cache hits. Index size is also reduced, which generally improves performance with faster queries. If one shard goes down, the other shards are still operational, although you'll want to add some form of replication to avoid data loss. Like federation, there is no single central master serializing writes, allowing you to write in parallel with increased throughput.
+
+Common ways to shard a table of users are either through the user's last name initial or the user's geographic location.
+
+## Disadvantage(s) : sharding
+
+- You'll need to update your application logic to work with shards, which could result in complex SQL queries.
+- Data distribution can become lopsided in a shard. For example, a set of power users on a shard could result in increased load to that shard compared to others.
+ - Rebalancing adds additional complexity. A sharding function based on [consistent hashing](http://www.paperplanes.de/2011/12/9/the-magic-of-consistent-hashing.html) can reduce the amount of transferred data.
+- Joining data from multiple shards is more complex.
+- Sharding adds more hardware and additional complexity.
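A toy sketch of a consistent-hashing shard function (shard names are made up; one point per shard, no virtual nodes). Adding a shard relocates only the keys that fall on the new shard's arc, rather than rehashing everything:

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Each shard owns the arc of hash space ending at its point."""

    def __init__(self, shards):
        self._points = sorted((_hash(s), s) for s in shards)

    def shard_for(self, key: str) -> str:
        points = [p for p, _ in self._points]
        i = bisect.bisect(points, _hash(key)) % len(self._points)  # wrap around
        return self._points[i][1]

ring = ConsistentHashRing(["shard-a", "shard-b", "shard-c"])
before = {k: ring.shard_for(k) for k in map(str, range(1000))}

bigger = ConsistentHashRing(["shard-a", "shard-b", "shard-c", "shard-d"])
after = {k: bigger.shard_for(k) for k in before}

# Only keys landing on shard-d's new arc move; the rest stay put.
moved = sum(1 for k in before if before[k] != after[k])
assert 0 < moved < len(before)
```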
+
+## Source(s) and further reading: sharding
+
+- [The coming of the shard](http://highscalability.com/blog/2009/8/6/an-unorthodox-approach-to-database-design-the-coming-of-the.html)
+- [Shard database architecture](https://en.wikipedia.org/wiki/Shard_(database_architecture))
+- [Consistent hashing](http://www.paperplanes.de/2011/12/9/the-magic-of-consistent-hashing.html)
\ No newline at end of file
diff --git a/resources/noat.cards/Wide column store.md b/resources/noat.cards/Wide column store.md
new file mode 100644
index 00000000..b0aa828c
--- /dev/null
+++ b/resources/noat.cards/Wide column store.md
@@ -0,0 +1,27 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Wide column store
+
+## introduction
+
+
+
+[Source: SQL & NoSQL, a brief history](http://blog.grio.com/2015/11/sql-nosql-a-brief-history.html)
+
+> Abstraction: nested map `ColumnFamily<RowKey, Columns<ColKey, Value, Timestamp>>`
+
+A wide column store's basic unit of data is a column (name/value pair) . A column can be grouped in column families (analogous to a SQL table) . Super column families further group column families. You can access each column independently with a row key, and columns with the same row key form a row. Each value contains a timestamp for versioning and for conflict resolution.
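The nested-map abstraction can be sketched in a few lines of Python (a toy model, not any real store's API; a counter stands in for wall-clock timestamps):

```python
from itertools import count

# ColumnFamily -> RowKey -> ColumnName -> (value, timestamp)
store = {"Users": {}}
_clock = count()  # monotonic stand-in for timestamps

def put(family, row_key, col, value):
    ts = next(_clock)
    row = store[family].setdefault(row_key, {})
    current = row.get(col)
    if current is None or ts >= current[1]:  # last write wins via timestamp
        row[col] = (value, ts)

put("Users", "user:123", "name", "alice")
put("Users", "user:123", "city", "nyc")
put("Users", "user:123", "name", "bob")  # newer timestamp wins the conflict

row = store["Users"]["user:123"]  # columns sharing a row key form a row
assert row["name"][0] == "bob" and row["city"][0] == "nyc"
```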
+
+Google introduced [Bigtable](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf) as the first wide column store, which influenced the open-source [HBase](https://www.mapr.com/blog/in-depth-look-hbase-architecture) often used in the Hadoop ecosystem, and [Cassandra](http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/architecture/architectureIntro_c.html) from Facebook. Stores such as BigTable, HBase, and Cassandra maintain keys in lexicographic order, allowing efficient retrieval of selective key ranges.
+
+Wide column stores offer high availability and high scalability. They are often used for very large data sets.
+
+## Source(s) and further reading: wide column store
+
+- [SQL & NoSQL, a brief history](http://blog.grio.com/2015/11/sql-nosql-a-brief-history.html)
+- [Bigtable architecture](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf)
+- [HBase architecture](https://www.mapr.com/blog/in-depth-look-hbase-architecture)
+- [Cassandra architecture](http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/architecture/architectureIntro_c.html)
\ No newline at end of file
diff --git a/resources/noat.cards/Write-behind (write-back).md b/resources/noat.cards/Write-behind (write-back).md
new file mode 100644
index 00000000..321e9ee5
--- /dev/null
+++ b/resources/noat.cards/Write-behind (write-back).md
@@ -0,0 +1,22 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Write-behind (write-back)
+
+## Introduction
+
+
+
+[Source: Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
+
+In write-behind, the application does the following:
+
+- Add/update entry in cache
+- Asynchronously write entry to the data store, improving write performance
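The two steps above can be sketched as follows (a toy model with invented names; an explicit `flush()` stands in for the asynchronous background worker, and a dict stands in for the data store):

```python
from collections import deque

class WriteBehindCache:
    """Writes hit the cache immediately; the store is written later."""

    def __init__(self):
        self.cache = {}
        self.store = {}           # stands in for the database
        self._pending = deque()   # queued writes awaiting the flush

    def set(self, key, value):
        self.cache[key] = value      # 1. add/update entry in cache
        self._pending.append(key)    # 2. queue the store write for later

    def flush(self):  # in practice, run asynchronously in the background
        while self._pending:
            key = self._pending.popleft()
            self.store[key] = self.cache[key]

c = WriteBehindCache()
c.set("user:1", {"name": "alice"})
assert c.cache["user:1"] == {"name": "alice"}  # write returned fast
assert "user:1" not in c.store                 # data loss window if cache dies here
c.flush()
assert c.store["user:1"] == {"name": "alice"}
```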
+
+## Disadvantage(s) : write-behind
+
+- There could be data loss if the cache goes down prior to its contents hitting the data store.
+- It is more complex to implement write-behind than it is to implement cache-aside or write-through.
\ No newline at end of file
diff --git a/resources/noat.cards/Write-through.md b/resources/noat.cards/Write-through.md
new file mode 100644
index 00000000..95046025
--- /dev/null
+++ b/resources/noat.cards/Write-through.md
@@ -0,0 +1,39 @@
++++
+noatcards = True
+isdraft = False
++++
+
+# Write-through
+
+## Write-through introduction
+
+
+
+[Source: Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
+
+The application uses the cache as the main data store, reading and writing data to it, while the cache is responsible for reading and writing to the database:
+
+- Application adds/updates entry in cache
+- Cache synchronously writes entry to data store
+- Return
+
+Application code:
+
+```
+ set_user(12345, {"foo":"bar"})
+```
+
+Cache code:
+
+```
+def set_user(user_id, values):
+    user = db.query("UPDATE Users WHERE id = {0}", user_id, values)
+    cache.set(user_id, user)
+```
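A runnable version of the pseudocode above, with plain dicts standing in for both the database and the cache (a sketch of the pattern, not the source's actual implementation):

```python
db = {}     # stands in for the database table
cache = {}  # stands in for the cache

def set_user(user_id, values):
    db[user_id] = values     # cache synchronously writes through to the store
    cache[user_id] = values  # ...and keeps its own entry fresh

def get_user(user_id):
    return cache.get(user_id)  # reads are served from the cache

set_user(12345, {"foo": "bar"})
assert get_user(12345) == {"foo": "bar"}  # just-written data reads fast
assert db[12345] == {"foo": "bar"}        # and the cache is never stale
```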
+
+Write-through is a slow overall operation due to the write operation, but subsequent reads of just written data are fast. Users are generally more tolerant of latency when updating data than reading data. Data in the cache is not stale.
+
+## Disadvantage(s) : write through
+
+- When a new node is created due to failure or scaling, the new node will not cache entries until the entry is updated in the database. Cache-aside in conjunction with write through can mitigate this issue.
+- Most data written might never be read, which can be minimized with a TTL.
\ No newline at end of file
From 477576ddca6df15692f548b2a3e20bdfcadc8828 Mon Sep 17 00:00:00 2001
From: Vu
Date: Sun, 21 Mar 2021 17:55:58 +0700
Subject: [PATCH 06/11] formatting
---
resources/noat.cards/Application layer.md | 6 +++---
resources/noat.cards/Availability patterns.md | 21 ++++++++++---------
.../noat.cards/Availability vs consistency.md | 9 ++++----
resources/noat.cards/Cache locations.md | 10 ++++-----
resources/noat.cards/Cache-aside.md | 3 ++-
resources/noat.cards/Cache.md | 9 ++++----
resources/noat.cards/Communication.md | 4 ++--
resources/noat.cards/Consistency patterns.md | 8 +++----
.../noat.cards/Content delivery network.md | 11 +++++-----
.../Database caching, what to cache.md | 6 +++---
resources/noat.cards/Database.md | 7 +++++--
resources/noat.cards/Domain name system.md | 4 ++--
.../Hypertext transfer protocol (HTTP).md | 2 +-
...cy numbers every programmer should know.md | 2 +-
resources/noat.cards/Load balancer.md | 4 ++--
.../noat.cards/Performance vs scalability.md | 2 +-
resources/noat.cards/SQL tuning.md | 2 +-
resources/noat.cards/Security.md | 2 +-
18 files changed, 60 insertions(+), 52 deletions(-)
diff --git a/resources/noat.cards/Application layer.md b/resources/noat.cards/Application layer.md
index ae867b58..2c370154 100644
--- a/resources/noat.cards/Application layer.md
+++ b/resources/noat.cards/Application layer.md
@@ -23,16 +23,16 @@ Related to this discussion are [microservices](https://en.wikipedia.org/wiki/Mic
Pinterest, for example, could have the following microservices: user profile, follower, feed, search, photo upload, etc.
-### Service Discovery
+## Service Discovery
Systems such as [Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) can help services find each other by keeping track of registered names, addresses, ports, etc.
-### Disadvantage(s) : application layer
+## Disadvantage(s) : application layer
- Adding an application layer with loosely coupled services requires a different approach from an architectural, operations, and process viewpoint (vs a monolithic system) .
- Microservices can add complexity in terms of deployments and operations.
-### Source(s) and further reading
+## Source(s) and further reading
- [Intro to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale)
- [Crack the system design interview](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)
diff --git a/resources/noat.cards/Availability patterns.md b/resources/noat.cards/Availability patterns.md
index 09f0d3ba..1ba96e8b 100644
--- a/resources/noat.cards/Availability patterns.md
+++ b/resources/noat.cards/Availability patterns.md
@@ -15,7 +15,7 @@ The length of downtime is determined by whether the passive server is already ru
Active-passive failover can also be referred to as master-slave failover.
-### Active-active (Fail-Over)
+## Active-active (Fail-Over)
In active-active, both servers are managing traffic, spreading the load between them.
@@ -23,39 +23,40 @@ If the servers are public-facing, the DNS would need to know about the public IP
Active-active failover can also be referred to as master-master failover.
-### Disadvantage(s) : failover
+## Disadvantage(s) : failover
- Fail-over adds more hardware and additional complexity.
- There is a potential for loss of data if the active system fails before any newly written data can be replicated to the passive.
-### Master-slave replication
+## Master-slave replication
The master serves reads and writes, replicating writes to one or more slaves, which serve only reads. Slaves can also replicate to additional slaves in a tree-like fashion. If the master goes offline, the system can continue to operate in read-only mode until a slave is promoted to a master or a new master is provisioned.
-[ ](https://camo.githubusercontent.com/6a097809b9690236258747d969b1d3e0d93bb8ca/687474703a2f2f692e696d6775722e636f6d2f4339696f47746e2e706e67)
+
+
_[Source: Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/) _
-### Disadvantage(s) : master-slave replication
+## Disadvantage(s) : master-slave replication
- Additional logic is needed to promote a slave to a master.
- See [Disadvantage(s) : replication](https://github.com/donnemartin/system-design-primer#disadvantages-replication) for points related to both master-slave and master-master.
-### Master-master replication
+## Master-master replication
Both masters serve reads and writes and coordinate with each other on writes. If either master goes down, the system can continue to operate with both reads and writes.
-[ ](https://camo.githubusercontent.com/5862604b102ee97d85f86f89edda44bde85a5b7f/687474703a2f2f692e696d6775722e636f6d2f6b7241484c47672e706e67)
+
_[Source: Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/) _
-### Disadvantage(s) : master-master replication
+## Disadvantage(s) : master-master replication
- You'll need a load balancer or you'll need to make changes to your application logic to determine where to write.
- Most master-master systems are either loosely consistent (violating ACID) or have increased write latency due to synchronization.
- Conflict resolution comes more into play as more write nodes are added and as latency increases.
- See [Disadvantage(s) : replication](https://github.com/donnemartin/system-design-primer#disadvantages-replication) for points related to both master-slave and master-master.
-### Disadvantage(s) : replication
+## Disadvantage(s) : replication
- There is a potential for loss of data if the master fails before any newly written data can be replicated to other nodes.
- Writes are replayed to the read replicas. If there are a lot of writes, the read replicas can get bogged down with replaying writes and can't do as many reads.
@@ -63,7 +64,7 @@ _[Source: Scalability, availability, stability, patterns](http://www.slideshare.
- On some systems, writing to the master can spawn multiple threads to write in parallel, whereas read replicas only support writing sequentially with a single thread.
- Replication adds more hardware and additional complexity.
-### Source(s) and further reading: replication
+## Source(s) and further reading: replication
- [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
- [Multi-master replication](https://en.wikipedia.org/wiki/Multi-master_replication)
\ No newline at end of file
diff --git a/resources/noat.cards/Availability vs consistency.md b/resources/noat.cards/Availability vs consistency.md
index 2da924ff..abbe8651 100644
--- a/resources/noat.cards/Availability vs consistency.md
+++ b/resources/noat.cards/Availability vs consistency.md
@@ -5,10 +5,11 @@ isdraft = False
# Availability vs consistency
-### CAP theorem
+## CAP theorem
-[ ](https://camo.githubusercontent.com/13719354da7dcd34cd79ff5f8b6306a67bc18261/687474703a2f2f692e696d6775722e636f6d2f62674c4d4932752e706e67)
-_[Source: CAP theorem revisited](http://robertgreiner.com/2014/08/cap-theorem-revisited) _
+
+
+[Source: CAP theorem revisited](http://robertgreiner.com/2014/08/cap-theorem-revisited)
In a distributed computer system, you can only support two of the following guarantees:
@@ -28,7 +29,7 @@ Responses return the most recent version of the data, which might not be the lat
AP is a good choice if the business needs allow for [eventual consistency](https://github.com/donnemartin/system-design-primer#eventual-consistency) or when the system needs to continue working despite external errors.
-### Source(s) and further reading
+## Source(s) and further reading
- [CAP theorem revisited](http://robertgreiner.com/2014/08/cap-theorem-revisited/)
- [A plain english introduction to CAP theorem](http://ksat.me/a-plain-english-introduction-to-cap-theorem/)
diff --git a/resources/noat.cards/Cache locations.md b/resources/noat.cards/Cache locations.md
index 7f225332..7227f3c0 100644
--- a/resources/noat.cards/Cache locations.md
+++ b/resources/noat.cards/Cache locations.md
@@ -6,23 +6,23 @@ isdraft = False
# Cache locations
-### Client caching
+## Client caching
Caches can be located on the client side (OS or browser) , [server side](https://github.com/donnemartin/system-design-primer#reverse-proxy) , or in a distinct cache layer.
-### CDN caching
+## CDN caching
[CDNs](https://github.com/donnemartin/system-design-primer#content-delivery-network) are considered a type of cache.
-### Web server caching
+## Web server caching
[Reverse proxies](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server) and caches such as [Varnish](https://www.varnish-cache.org/) can serve static and dynamic content directly. Web servers can also cache requests, returning responses without having to contact application servers.
-### Database caching
+## Database caching
Your database usually includes some level of caching in a default configuration, optimized for a generic use case. Tweaking these settings for specific usage patterns can further boost performance.
-### Application caching
+## Application caching
In-memory caches such as Memcached and Redis are key-value stores between your application and your data storage. Since the data is held in RAM, it is much faster than typical databases where data is stored on disk. RAM is more limited than disk, so [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) algorithms such as [least recently used (LRU) ](https://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used) can help invalidate 'cold' entries and keep 'hot' data in RAM.
diff --git a/resources/noat.cards/Cache-aside.md b/resources/noat.cards/Cache-aside.md
index 240e747c..a0da8157 100644
--- a/resources/noat.cards/Cache-aside.md
+++ b/resources/noat.cards/Cache-aside.md
@@ -7,7 +7,8 @@ isdraft = False
## Introduction
-[ ](https://camo.githubusercontent.com/7f5934e49a678b67f65e5ed53134bc258b007ebb/687474703a2f2f692e696d6775722e636f6d2f4f4e6a4f52716b2e706e67)
+
+
_[Source: From cache to in-memory data grid](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast) _
The application is responsible for reading and writing from storage. The cache does not interact with storage directly. The application does the following:
diff --git a/resources/noat.cards/Cache.md b/resources/noat.cards/Cache.md
index ae34a915..903f1bb1 100644
--- a/resources/noat.cards/Cache.md
+++ b/resources/noat.cards/Cache.md
@@ -6,21 +6,22 @@ isdraft = False
# Cache
-### Cache - Introduction
-[ ](https://camo.githubusercontent.com/7acedde6aa7853baf2eb4a53f88e2595ebe43756/687474703a2f2f692e696d6775722e636f6d2f51367a32344c612e706e67)
+## Cache - Introduction
+
+
_[Source: Scalable system design patterns](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html) _
Caching improves page load times and can reduce the load on your servers and databases. In this model, the dispatcher will first look up whether the request has been made before and try to find the previous result to return, in order to save the actual execution.
Databases often benefit from a uniform distribution of reads and writes across its partitions. Popular items can skew the distribution, causing bottlenecks. Putting a cache in front of a database can help absorb uneven loads and spikes in traffic.
-### Disadvantage(s) : cache
+## Disadvantage(s) : cache
- Need to maintain consistency between caches and the source of truth such as the database through [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) .
- Need to make application changes such as adding Redis or memcached.
- Cache invalidation is a difficult problem; there is additional complexity associated with when to update the cache.
-### Source(s) and further reading
+## Source(s) and further reading
- [From cache to in-memory data grid](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast)
- [Scalable system design patterns](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html)
diff --git a/resources/noat.cards/Communication.md b/resources/noat.cards/Communication.md
index cf3bc5b3..17c02490 100644
--- a/resources/noat.cards/Communication.md
+++ b/resources/noat.cards/Communication.md
@@ -1,5 +1,5 @@
-Communication
+# Communication
-------------
---
-[ ](https://camo.githubusercontent.com/1d761d5688d28ce1fb12a0f1c8191bca96eece4c/687474703a2f2f692e696d6775722e636f6d2f354b656f6351732e6a7067)
+
_[Source: OSI 7 layer model](http://www.escotal.com/osilayer.html) _
\ No newline at end of file
diff --git a/resources/noat.cards/Consistency patterns.md b/resources/noat.cards/Consistency patterns.md
index 5d0aa77c..6a4bd22f 100644
--- a/resources/noat.cards/Consistency patterns.md
+++ b/resources/noat.cards/Consistency patterns.md
@@ -9,24 +9,24 @@ isdraft = False
With multiple copies of the same data, we are faced with options on how to synchronize them so clients have a consistent view of the data. Recall the definition of consistency from the [CAP theorem](https://github.com/donnemartin/system-design-primer#cap-theorem) - Every read receives the most recent write or an error.
-### Weak consistency
+## Weak consistency
After a write, reads may or may not see it. A best effort approach is taken.
This approach is seen in systems such as memcached. Weak consistency works well in real time use cases such as VoIP, video chat, and realtime multiplayer games. For example, if you are on a phone call and lose reception for a few seconds, when you regain connection you do not hear what was spoken during connection loss.
-### Eventual consistency
+## Eventual consistency
After a write, reads will eventually see it (typically within milliseconds) . Data is replicated asynchronously.
This approach is seen in systems such as DNS and email. Eventual consistency works well in highly available systems.
-### Strong consistency
+## Strong consistency
After a write, reads will see it. Data is replicated synchronously.
This approach is seen in file systems and RDBMSes. Strong consistency works well in systems that need transactions.
-### Source(s) and further reading
+## Source(s) and further reading
- [Transactions across data centers](http://snarfed.org/transactions_across_datacenters_io.html)
\ No newline at end of file
diff --git a/resources/noat.cards/Content delivery network.md b/resources/noat.cards/Content delivery network.md
index 8dc05c6b..36e4c196 100644
--- a/resources/noat.cards/Content delivery network.md
+++ b/resources/noat.cards/Content delivery network.md
@@ -7,7 +7,8 @@ isdraft = False
# Content delivery network
-[ ](https://camo.githubusercontent.com/853a8603651149c686bf3c504769fc594ff08849/687474703a2f2f692e696d6775722e636f6d2f683954417547492e6a7067)
+
+
_[Source: Why use a CDN](https://www.creative-artworks.eu/why-use-a-content-delivery-network-cdn/) _
A content delivery network (CDN) is a globally distributed network of proxy servers, serving content from locations closer to the user. Generally, static files such as HTML/CSS/JSS, photos, and videos are served from CDN, although some CDNs such as Amazon's CloudFront support dynamic content. The site's DNS resolution will tell clients which server to contact.
@@ -17,13 +18,13 @@ Serving content from CDNs can significantly improve performance in two ways:
- Users receive content at data centers close to them
- Your servers do not have to serve requests that the CDN fulfills
-### Push CDNs
+## Push CDNs
Push CDNs receive new content whenever changes occur on your server. You take full responsibility for providing content, uploading directly to the CDN and rewriting URLs to point to the CDN. You can configure when content expires and when it is updated. Content is uploaded only when it is new or changed, minimizing traffic, but maximizing storage.
Sites with a small amount of traffic or sites with content that isn't often updated work well with push CDNs. Content is placed on the CDNs once, instead of being re-pulled at regular intervals.
-### Pull CDNs
+## Pull CDNs
Pull CDNs grab new content from your server when the first user requests the content. You leave the content on your server and rewrite URLs to point to the CDN. This results in a slower request until the content is cached on the server.
@@ -31,13 +32,13 @@ Pull CDNs grab new content from your server when the first user requests the con
Sites with heavy traffic work well with pull CDNs, as traffic is spread out more evenly with only recently-requested content remaining on the CDN.
-### Disadvantage(s) : CDN
+## Disadvantage(s) : CDN
- CDN costs could be significant depending on traffic, although this should be weighed with additional costs you would incur not using a CDN.
- Content might be stale if it is updated before the TTL expires it.
- CDNs require changing URLs for static content to point to the CDN.
-### Source(s) and further reading
+## Source(s) and further reading
- [Globally distributed content delivery](http://repository.cmu.edu/cgi/viewcontent.cgi?article=2112&context=compsci)
- [The differences between push and pull CDNs](http://www.travelblogadvice.com/technical/the-differences-between-push-and-pull-cdns/)
diff --git a/resources/noat.cards/Database caching, what to cache.md b/resources/noat.cards/Database caching, what to cache.md
index 429e5617..ef618c49 100644
--- a/resources/noat.cards/Database caching, what to cache.md
+++ b/resources/noat.cards/Database caching, what to cache.md
@@ -5,7 +5,7 @@ isdraft = False
# Database caching, what to cache
-### Introduction
+## Introduction
There are multiple levels you can cache that fall into two general categories: database queries and objects:
@@ -16,14 +16,14 @@ There are multiple levels you can cache that fall into two general categories: d
Generally, you should try to avoid file-based caching, as it makes cloning and auto-scaling more difficult.
-### Caching at the database query level
+## Caching at the database query level
Whenever you query the database, hash the query as a key and store the result to the cache. This approach suffers from expiration issues:
- Hard to delete a cached result with complex queries
- If one piece of data changes such as a table cell, you need to delete all cached queries that might include the changed cell
-### Caching at the object level
+## Caching at the object level
See your data as an object, similar to what you do with your application code. Have your application assemble the dataset from the database into a class instance or a data structure(s) :
diff --git a/resources/noat.cards/Database.md b/resources/noat.cards/Database.md
index ccc5a3bb..a1f74dd1 100644
--- a/resources/noat.cards/Database.md
+++ b/resources/noat.cards/Database.md
@@ -6,10 +6,13 @@ isdraft = False
# Database
-[ ](https://camo.githubusercontent.com/15a7553727e6da98d0de5e9ca3792f6d2b5e92d4/687474703a2f2f692e696d6775722e636f6d2f586b6d3543587a2e706e67)
+
+## Relational database management system (RDBMS)
+
+
+
_[Source: Scaling up to your first 10 million users](https://www.youtube.com/watch?v=vg5onp8TU6Q) _
-### Relational database management system (RDBMS)
A relational database like SQL is a collection of data items organized in tables.
diff --git a/resources/noat.cards/Domain name system.md b/resources/noat.cards/Domain name system.md
index f7e9ff6a..387a32ae 100644
--- a/resources/noat.cards/Domain name system.md
+++ b/resources/noat.cards/Domain name system.md
@@ -28,13 +28,13 @@ Services such as [CloudFlare](https://www.cloudflare.com/dns/) and [Route 53](h
- Latency-based
- Geolocation-based
-### Disadvantage(s) : DNS
+## Disadvantage(s) : DNS
- Accessing a DNS server introduces a slight delay, although mitigated by caching described above.
- DNS server management could be complex, although they are generally managed by [governments, ISPs, and large companies](http://superuser.com/questions/472695/who-controls-the-dns-servers/472729) .
- DNS services have recently come under DDoS attack, preventing users from accessing websites such as Twitter without knowing Twitter's IP address(es) .
-### Source(s) and further reading
+## Source(s) and further reading
- [DNS architecture](https://technet.microsoft.com/en-us/library/dd197427(v=ws.10) .aspx)
- [Wikipedia](https://en.wikipedia.org/wiki/Domain_Name_System)
diff --git a/resources/noat.cards/Hypertext transfer protocol (HTTP).md b/resources/noat.cards/Hypertext transfer protocol (HTTP).md
index cb81c052..114a7fa0 100644
--- a/resources/noat.cards/Hypertext transfer protocol (HTTP).md
+++ b/resources/noat.cards/Hypertext transfer protocol (HTTP).md
@@ -14,7 +14,7 @@ perform load balancing, caching, encryption, and compression.
A basic HTTP request consists of a verb (method) and a resource (endpoint) . Below are common HTTP verbs:
-| Verb | Description | Idempotent* | Safe | Cacheable |
+| Verb | Description | Idempotent\* | Safe | Cacheable |
|---|---|---|---|---|
| GET | Reads a resource | Yes | Yes | Yes |
| POST | Creates a resource or trigger a process that handles data | No | No | Yes if response contains freshness info |
diff --git a/resources/noat.cards/Latency numbers every programmer should know.md b/resources/noat.cards/Latency numbers every programmer should know.md
index 6d95ce6e..437a779c 100644
--- a/resources/noat.cards/Latency numbers every programmer should know.md
+++ b/resources/noat.cards/Latency numbers every programmer should know.md
@@ -1,4 +1,4 @@
-### Latency numbers every programmer should know
+## Latency numbers every programmer should know
---
Latency Comparison Numbers
--------------------------
diff --git a/resources/noat.cards/Load balancer.md b/resources/noat.cards/Load balancer.md
index b736ea74..e1123039 100644
--- a/resources/noat.cards/Load balancer.md
+++ b/resources/noat.cards/Load balancer.md
@@ -57,8 +57,8 @@ Load balancers can also help with horizontal scaling, improving performance and
## Disadvantage(s) : horizontal scaling
- Scaling horizontally introduces complexity and involves cloning servers
- * Servers should be stateless: they should not contain any user-related data like sessions or profile pictures
- * Sessions can be stored in a centralized data store such as a [database](https://github.com/donnemartin/system-design-primer#database) (SQL, NoSQL) or a persistent [cache](https://github.com/donnemartin/system-design-primer#cache) (Redis, Memcached)
+ - Servers should be stateless: they should not contain any user-related data like sessions or profile pictures
+ - Sessions can be stored in a centralized data store such as a [database](https://github.com/donnemartin/system-design-primer#database) (SQL, NoSQL) or a persistent [cache](https://github.com/donnemartin/system-design-primer#cache) (Redis, Memcached)
- Downstream servers such as caches and databases need to handle more simultaneous connections as upstream servers scale out
## Disadvantage(s) : load balancer
diff --git a/resources/noat.cards/Performance vs scalability.md b/resources/noat.cards/Performance vs scalability.md
index 7e9d89bf..679ce55c 100644
--- a/resources/noat.cards/Performance vs scalability.md
+++ b/resources/noat.cards/Performance vs scalability.md
@@ -14,7 +14,7 @@ Another way to look at performance vs scalability:
- If you have a performance problem, your system is slow for a single user.
- If you have a scalability problem, your system is fast for a single user but slow under heavy load.
-### Source(s) and further reading
+## Source(s) and further reading
- [A word on scalability](http://www.allthingsdistributed.com/2006/03/a_word_on_scalability.html)
- [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
\ No newline at end of file
diff --git a/resources/noat.cards/SQL tuning.md b/resources/noat.cards/SQL tuning.md
index bcbb653b..a6e96c23 100644
--- a/resources/noat.cards/SQL tuning.md
+++ b/resources/noat.cards/SQL tuning.md
@@ -28,7 +28,7 @@ Benchmarking and profiling might point you to the following optimizations.
- `VARCHAR(255)` uses 255 because it is the largest number representable in 8 bits, often maximizing the use of a byte in some RDBMS.
- Set the `NOT NULL` constraint where applicable to [improve search performance](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search).
-### Use good indices
+## Use good indices
- Columns that you are querying (`SELECT`, `GROUP BY`, `ORDER BY`, `JOIN`) could be faster with indices.
- Indices are usually represented as self-balancing [B-trees](https://en.wikipedia.org/wiki/B-tree) that keep data sorted and allow searches, sequential access, insertions, and deletions in logarithmic time.
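The effect of an index on a queried column can be seen directly in a query plan; a minimal sketch using Python's built-in `sqlite3` (chosen only for brevity — the exact plan wording varies by engine and version):

```python
import sqlite3

# In-memory database with a table queried on a non-indexed column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(1000)])

query = "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?"

# Without an index, the plan reports a full table scan.
detail_before = conn.execute(query, ("user500@example.com",)).fetchone()[-1]

# With an index, the plan reports a B-tree search using the index.
conn.execute("CREATE INDEX idx_users_email ON users (email)")
detail_after = conn.execute(query, ("user500@example.com",)).fetchone()[-1]

print(detail_before)   # e.g. "SCAN users"
print(detail_after)    # e.g. "SEARCH users USING INDEX idx_users_email (email=?)"
conn.close()
```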
diff --git a/resources/noat.cards/Security.md b/resources/noat.cards/Security.md
index 7af49afd..a0e0ec17 100644
--- a/resources/noat.cards/Security.md
+++ b/resources/noat.cards/Security.md
@@ -10,7 +10,7 @@ Security is a broad topic. Unless you have considerable experience, a security b
- Use parameterized queries to prevent SQL injection.
- Use the principle of [least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege) .
-### [](https://github.com/donnemartin/system-design-primer#sources-and-further-reading-12) Source(s) and further reading
+## Source(s) and further reading
- [Security guide for developers](https://github.com/FallibleInc/security-guide-for-developers)
- [OWASP top ten](https://www.owasp.org/index.php/OWASP_Top_Ten_Cheat_Sheet)
\ No newline at end of file
From ade04566de3d361b288a88950650993cf6220650 Mon Sep 17 00:00:00 2001
From: Vu
Date: Sun, 21 Mar 2021 18:33:05 +0700
Subject: [PATCH 07/11] remove incorrect formatting symbols
---
resources/noat.cards/Application layer.md | 2 +-
resources/noat.cards/Availability patterns.md | 4 ++--
resources/noat.cards/Cache-aside.md | 2 +-
resources/noat.cards/Cache.md | 2 +-
resources/noat.cards/Communication.md | 2 +-
resources/noat.cards/Content delivery network.md | 2 +-
resources/noat.cards/Database.md | 2 +-
resources/noat.cards/Domain name system.md | 2 +-
resources/noat.cards/Federation.md | 2 +-
resources/noat.cards/Graph database.md | 2 +-
resources/noat.cards/Load balancer.md | 2 +-
resources/noat.cards/Remote procedure call (RPC).md | 2 +-
resources/noat.cards/Reverse proxy (web server).md | 2 +-
resources/noat.cards/Transmission control protocol (TCP).md | 2 +-
resources/noat.cards/User datagram protocol (UDP).md | 2 +-
15 files changed, 16 insertions(+), 16 deletions(-)
diff --git a/resources/noat.cards/Application layer.md b/resources/noat.cards/Application layer.md
index 2c370154..a575e07b 100644
--- a/resources/noat.cards/Application layer.md
+++ b/resources/noat.cards/Application layer.md
@@ -9,7 +9,7 @@ isdraft = False

-_[Source: Intro to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale/#platform_layer) _
+[Source: Intro to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale/#platform_layer)
Separating out the web layer from the application layer (also known as platform layer) allows you to scale and configure both layers independently. Adding a new API results in adding application servers without necessarily adding additional web servers.
diff --git a/resources/noat.cards/Availability patterns.md b/resources/noat.cards/Availability patterns.md
index 1ba96e8b..3872f0f9 100644
--- a/resources/noat.cards/Availability patterns.md
+++ b/resources/noat.cards/Availability patterns.md
@@ -35,7 +35,7 @@ The master serves reads and writes, replicating writes to one or more slaves, wh

-_[Source: Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/) _
+[Source: Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
## Disadvantage(s) : master-slave replication
@@ -47,7 +47,7 @@ _[Source: Scalability, availability, stability, patterns](http://www.slideshare.
Both masters serve reads and writes and coordinate with each other on writes. If either master goes down, the system can continue to operate with both reads and writes.

-_[Source: Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/) _
+[Source: Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
## Disadvantage(s) : master-master replication
diff --git a/resources/noat.cards/Cache-aside.md b/resources/noat.cards/Cache-aside.md
index a0da8157..0f6cc429 100644
--- a/resources/noat.cards/Cache-aside.md
+++ b/resources/noat.cards/Cache-aside.md
@@ -9,7 +9,7 @@ isdraft = False

-_[Source: From cache to in-memory data grid](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast) _
+[Source: From cache to in-memory data grid](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast)
The application is responsible for reading and writing from storage. The cache does not interact with storage directly. The application does the following:
diff --git a/resources/noat.cards/Cache.md b/resources/noat.cards/Cache.md
index 903f1bb1..ddb743b6 100644
--- a/resources/noat.cards/Cache.md
+++ b/resources/noat.cards/Cache.md
@@ -9,7 +9,7 @@ isdraft = False
## Cache - Introduction

-_[Source: Scalable system design patterns](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html) _
+[Source: Scalable system design patterns](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html)
Caching improves page load times and can reduce the load on your servers and databases. In this model, the dispatcher first looks up whether the request has been made before and tries to return the previous result, to save the actual execution.
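The dispatcher's lookup-before-execute model can be sketched in a few lines (hypothetical names; a plain dict stands in for a real cache such as Memcached or Redis):

```python
# Minimal sketch of a dispatcher that checks the cache before executing.
cache = {}

def expensive_query(key):
    # Stand-in for a slow operation, e.g. a database query.
    return f"result-for-{key}"

def dispatch(key):
    if key in cache:                  # request seen before: reuse result
        return cache[key]
    result = expensive_query(key)     # first time: do the actual execution
    cache[key] = result               # remember it for later requests
    return result

dispatch("user:1")   # executes the query and caches the result
dispatch("user:1")   # served from the cache, no execution
```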
diff --git a/resources/noat.cards/Communication.md b/resources/noat.cards/Communication.md
index 17c02490..88b969d2 100644
--- a/resources/noat.cards/Communication.md
+++ b/resources/noat.cards/Communication.md
@@ -2,4 +2,4 @@
-------------
---

-_[Source: OSI 7 layer model](http://www.escotal.com/osilayer.html) _
\ No newline at end of file
+[Source: OSI 7 layer model](http://www.escotal.com/osilayer.html)
\ No newline at end of file
diff --git a/resources/noat.cards/Content delivery network.md b/resources/noat.cards/Content delivery network.md
index 36e4c196..6326c73e 100644
--- a/resources/noat.cards/Content delivery network.md
+++ b/resources/noat.cards/Content delivery network.md
@@ -9,7 +9,7 @@ isdraft = False

-_[Source: Why use a CDN](https://www.creative-artworks.eu/why-use-a-content-delivery-network-cdn/) _
+[Source: Why use a CDN](https://www.creative-artworks.eu/why-use-a-content-delivery-network-cdn/)
A content delivery network (CDN) is a globally distributed network of proxy servers, serving content from locations closer to the user. Generally, static files such as HTML/CSS/JS, photos, and videos are served from the CDN, although some CDNs such as Amazon's CloudFront support dynamic content. The site's DNS resolution will tell clients which server to contact.
diff --git a/resources/noat.cards/Database.md b/resources/noat.cards/Database.md
index a1f74dd1..172928dd 100644
--- a/resources/noat.cards/Database.md
+++ b/resources/noat.cards/Database.md
@@ -11,7 +11,7 @@ isdraft = False

-_[Source: Scaling up to your first 10 million users](https://www.youtube.com/watch?v=vg5onp8TU6Q) _
+[Source: Scaling up to your first 10 million users](https://www.youtube.com/watch?v=vg5onp8TU6Q)
A relational database, such as a SQL database, is a collection of data items organized in tables.
diff --git a/resources/noat.cards/Domain name system.md b/resources/noat.cards/Domain name system.md
index 387a32ae..fc7b2fb4 100644
--- a/resources/noat.cards/Domain name system.md
+++ b/resources/noat.cards/Domain name system.md
@@ -8,7 +8,7 @@ isdraft = False
## Introduction Domain Name System

-_[Source: DNS security presentation](http://www.slideshare.net/srikrupa5/dns-security-presentation-issa) _
+[Source: DNS security presentation](http://www.slideshare.net/srikrupa5/dns-security-presentation-issa)
A Domain Name System (DNS) translates a domain name such as [www.example.com](http://www.example.com/) to an IP address.
diff --git a/resources/noat.cards/Federation.md b/resources/noat.cards/Federation.md
index 9108a008..005e4f5d 100644
--- a/resources/noat.cards/Federation.md
+++ b/resources/noat.cards/Federation.md
@@ -9,7 +9,7 @@ isdraft = False

-_[Source: Scaling up to your first 10 million users](https://www.youtube.com/watch?v=vg5onp8TU6Q)_
+[Source: Scaling up to your first 10 million users](https://www.youtube.com/watch?v=vg5onp8TU6Q)
Federation (or functional partitioning) splits up databases by function. For example, instead of a single, monolithic database, you could have three databases: forums, users, and products, resulting in less read and write traffic to each database and therefore less replication lag. Smaller databases result in more data that can fit in memory, which in turn results in more cache hits due to improved cache locality. With no single central master serializing writes, you can write in parallel, increasing throughput.
diff --git a/resources/noat.cards/Graph database.md b/resources/noat.cards/Graph database.md
index 9067acdb..c135faad 100644
--- a/resources/noat.cards/Graph database.md
+++ b/resources/noat.cards/Graph database.md
@@ -10,7 +10,7 @@ isdraft = False

-_[Source: Graph database](https://en.wikipedia.org/wiki/File:GraphDatabase_PropertyGraph.png)_
+[Source: Graph database](https://en.wikipedia.org/wiki/File:GraphDatabase_PropertyGraph.png)
In a graph database, each node is a record and each arc is a relationship between two nodes. Graph databases are optimized to represent complex relationships with many foreign keys or many-to-many relationships.
diff --git a/resources/noat.cards/Load balancer.md b/resources/noat.cards/Load balancer.md
index e1123039..669cda48 100644
--- a/resources/noat.cards/Load balancer.md
+++ b/resources/noat.cards/Load balancer.md
@@ -9,7 +9,7 @@ isdraft = False

-_[Source: Scalable system design patterns](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html)_
+[Source: Scalable system design patterns](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html)
Load balancers distribute incoming client requests to computing resources such as application servers and databases. In each case, the load balancer returns the response from the computing resource to the appropriate client. Load balancers are effective at:
diff --git a/resources/noat.cards/Remote procedure call (RPC).md b/resources/noat.cards/Remote procedure call (RPC).md
index dda894b6..baa231f6 100644
--- a/resources/noat.cards/Remote procedure call (RPC).md
+++ b/resources/noat.cards/Remote procedure call (RPC).md
@@ -4,7 +4,7 @@

-_[Source: Crack the system design interview](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)_
+[Source: Crack the system design interview](http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/)
In an RPC, a client causes a procedure to execute on a different address space, usually a remote server. The procedure is coded as if it were a local procedure call, abstracting away the details of how to communicate with the server from the client program. Remote calls are usually slower and less reliable than local calls, so it is helpful to distinguish RPC calls from local calls. Popular RPC frameworks include [Protobuf](https://developers.google.com/protocol-buffers/), [Thrift](https://thrift.apache.org/), and [Avro](https://avro.apache.org/docs/current/).
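The "looks local, runs remote" idea can be sketched with a toy in-process stub (everything here — `RpcStub`, `server_dispatch`, the JSON wire format — is hypothetical; real frameworks like Thrift or gRPC add a transport, IDL-generated stubs, retries, and so on):

```python
import json

def server_dispatch(request_bytes):
    # Server side: unmarshal the request and run the real procedure.
    request = json.loads(request_bytes)
    procedures = {"add": lambda a, b: a + b}
    result = procedures[request["method"]](*request["params"])
    return json.dumps({"result": result}).encode()

class RpcStub:
    # Client side: attribute access produces a callable that marshals
    # the call, "sends" it, and unmarshals the response.
    def __getattr__(self, method):
        def call(*params):
            req = json.dumps({"method": method, "params": params}).encode()
            resp = server_dispatch(req)   # in a real system: a network round trip
            return json.loads(resp)["result"]
        return call

client = RpcStub()
print(client.add(2, 3))  # reads like a local call; executed by the "server" → 5
```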
diff --git a/resources/noat.cards/Reverse proxy (web server).md b/resources/noat.cards/Reverse proxy (web server).md
index e0cf4479..f3288512 100644
--- a/resources/noat.cards/Reverse proxy (web server).md
+++ b/resources/noat.cards/Reverse proxy (web server).md
@@ -9,7 +9,7 @@ isdraft = False

-_[Source: Wikipedia](https://commons.wikimedia.org/wiki/File:Proxy_concept_en.svg) _
+[Source: Wikipedia](https://commons.wikimedia.org/wiki/File:Proxy_concept_en.svg)
A reverse proxy is a web server that centralizes internal services and provides unified interfaces to the public. Requests from clients are forwarded to a server that can fulfill them before the reverse proxy returns the server's response to the client.
diff --git a/resources/noat.cards/Transmission control protocol (TCP).md b/resources/noat.cards/Transmission control protocol (TCP).md
index a8c061d3..f5cdcac7 100644
--- a/resources/noat.cards/Transmission control protocol (TCP).md
+++ b/resources/noat.cards/Transmission control protocol (TCP).md
@@ -9,7 +9,7 @@ isdraft = False

-_[Source: How to make a multiplayer game](http://www.wildbunny.co.uk/blog/2012/10/09/how-to-make-a-multi-player-game-part-1/)_
+[Source: How to make a multiplayer game](http://www.wildbunny.co.uk/blog/2012/10/09/how-to-make-a-multi-player-game-part-1/)
TCP is a connection-oriented protocol over an [IP network](https://en.wikipedia.org/wiki/Internet_Protocol). The connection is established and terminated using a [handshake](https://en.wikipedia.org/wiki/Handshaking). All packets sent are guaranteed to reach the destination in the original order and without corruption through:
diff --git a/resources/noat.cards/User datagram protocol (UDP).md b/resources/noat.cards/User datagram protocol (UDP).md
index 5dd2787f..ab537ae3 100644
--- a/resources/noat.cards/User datagram protocol (UDP).md
+++ b/resources/noat.cards/User datagram protocol (UDP).md
@@ -9,7 +9,7 @@ isdraft = False

-_[Source: How to make a multiplayer game](http://www.wildbunny.co.uk/blog/2012/10/09/how-to-make-a-multi-player-game-part-1/) _
+[Source: How to make a multiplayer game](http://www.wildbunny.co.uk/blog/2012/10/09/how-to-make-a-multi-player-game-part-1/)
UDP is connectionless. Datagrams (analogous to packets) are guaranteed only at the datagram level. Datagrams might reach their destination out of order or not at all. UDP does not support congestion control. Without the guarantees that TCP supports, UDP is generally more efficient.
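The connectionless, fire-and-forget nature of UDP is visible with Python's standard `socket` module — no handshake, just a datagram sent to an address (a loopback sketch; on a real network the datagram could be dropped or reordered):

```python
import socket

# Receiver: bind a UDP socket; the OS picks a free loopback port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

# Sender: no connect/handshake needed — just send the datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"ping", addr)   # fire and forget: no delivery guarantee

data, _ = receiver.recvfrom(1024)   # on loopback this normally arrives
print(data)

receiver.close()
sender.close()
```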
From c0531421c882fc64f430ed1a6dfa29dad59cf75f Mon Sep 17 00:00:00 2001
From: Vu
Date: Fri, 26 Mar 2021 22:15:35 +0700
Subject: [PATCH 08/11] add weight fields to organize cards
---
resources/noat.cards/Asynchronism.md | 1 +
resources/noat.cards/Availability patterns.md | 1 +
2 files changed, 2 insertions(+)
diff --git a/resources/noat.cards/Asynchronism.md b/resources/noat.cards/Asynchronism.md
index 77bd02f1..9ae199ed 100644
--- a/resources/noat.cards/Asynchronism.md
+++ b/resources/noat.cards/Asynchronism.md
@@ -1,6 +1,7 @@
+++
noatcards = True
isdraft = False
+weight = 100
+++
# Asynchronism
diff --git a/resources/noat.cards/Availability patterns.md b/resources/noat.cards/Availability patterns.md
index 3872f0f9..d413f065 100644
--- a/resources/noat.cards/Availability patterns.md
+++ b/resources/noat.cards/Availability patterns.md
@@ -1,6 +1,7 @@
+++
noatcards = True
isdraft = False
+weight = 120
+++
# Availability patterns
From 9b92b8963b5c84e3bfb60cbe367d705ffe452220 Mon Sep 17 00:00:00 2001
From: Vu
Date: Fri, 26 Mar 2021 23:50:38 +0700
Subject: [PATCH 09/11] revert to origin
---
.github/PULL_REQUEST_TEMPLATE.md | 6 +-
CONTRIBUTING.md | 14 +-
LICENSE.txt | 2 +-
README-ja.md | 774 ++++++++--------
README-zh-Hans.md | 764 ++++++++--------
README-zh-TW.md | 860 +++++++++---------
README.md | 832 ++++++++---------
TRANSLATIONS.md | 80 +-
generate-epub.sh | 2 +-
.../call_center/call_center.ipynb | 84 +-
.../call_center/call_center.py | 82 +-
.../deck_of_cards/deck_of_cards.ipynb | 60 +-
.../deck_of_cards/deck_of_cards.py | 58 +-
.../hash_table/hash_map.ipynb | 32 +-
.../hash_table/hash_map.py | 30 +-
.../lru_cache/lru_cache.ipynb | 42 +-
.../lru_cache/lru_cache.py | 40 +-
.../online_chat/online_chat.ipynb | 62 +-
.../online_chat/online_chat.py | 60 +-
.../parking_lot/parking_lot.ipynb | 80 +-
.../parking_lot/parking_lot.py | 80 +-
.../system_design/mint/README-zh-Hans.md | 186 ++--
solutions/system_design/mint/README.md | 180 ++--
.../system_design/mint/mint_mapreduce.py | 44 +-
solutions/system_design/mint/mint_snippets.py | 20 +-
.../system_design/pastebin/README-zh-Hans.md | 152 ++--
solutions/system_design/pastebin/README.md | 142 +--
solutions/system_design/pastebin/pastebin.py | 34 +-
.../query_cache/README-zh-Hans.md | 142 +--
solutions/system_design/query_cache/README.md | 140 +--
.../query_cache/query_cache_snippets.py | 52 +-
.../sales_rank/README-zh-Hans.md | 186 ++--
solutions/system_design/sales_rank/README.md | 184 ++--
.../sales_rank/sales_rank_mapreduce.py | 72 +-
.../scaling_aws/README-zh-Hans.md | 96 +-
solutions/system_design/scaling_aws/README.md | 98 +-
.../social_graph/README-zh-Hans.md | 172 ++--
.../system_design/social_graph/README.md | 176 ++--
.../social_graph/social_graph_snippets.py | 44 +-
.../system_design/twitter/README-zh-Hans.md | 114 +--
solutions/system_design/twitter/README.md | 116 +--
.../web_crawler/README-zh-Hans.md | 154 ++--
solutions/system_design/web_crawler/README.md | 154 ++--
.../web_crawler/web_crawler_mapreduce.py | 14 +-
.../web_crawler/web_crawler_snippets.py | 52 +-
45 files changed, 3384 insertions(+), 3384 deletions(-)
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index 93f40e1d..ca9bd979 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -1,11 +1,11 @@
## Review the Contributing Guidelines
-Before submitting a pull request, verify it meets all requirements in the [Contributing Guidelines](https://github.com/donnemartin/system-design-primer/blob/master/CONTRIBUTING.md) .
+Before submitting a pull request, verify it meets all requirements in the [Contributing Guidelines](https://github.com/donnemartin/system-design-primer/blob/master/CONTRIBUTING.md).
### Translations
-See the [Contributing Guidelines](https://github.com/donnemartin/system-design-primer/blob/master/CONTRIBUTING.md) . Verify you've:
+See the [Contributing Guidelines](https://github.com/donnemartin/system-design-primer/blob/master/CONTRIBUTING.md). Verify you've:
-* Tagged the [language maintainer](https://github.com/donnemartin/system-design-primer/blob/master/TRANSLATIONS.md)
+* Tagged the [language maintainer](https://github.com/donnemartin/system-design-primer/blob/master/TRANSLATIONS.md)
* Prefixed the title with a language code
* Example: "ja: Fix ..."
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index db116e60..69348619 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -7,14 +7,14 @@ Contributions are welcome!
## Bug Reports
-For bug reports or requests [submit an issue](https://github.com/donnemartin/system-design-primer/issues) .
+For bug reports or requests [submit an issue](https://github.com/donnemartin/system-design-primer/issues).
## Pull Requests
The preferred way to contribute is to fork the
[main repository](https://github.com/donnemartin/system-design-primer) on GitHub.
-1. Fork the [main repository](https://github.com/donnemartin/system-design-primer) . Click on the 'Fork' button near the top of the page. This creates a copy of the code under your account on the GitHub server.
+1. Fork the [main repository](https://github.com/donnemartin/system-design-primer). Click on the 'Fork' button near the top of the page. This creates a copy of the code under your account on the GitHub server.
2. Clone this copy to your local disk:
@@ -38,7 +38,7 @@ The preferred way to contribute is to fork the
### GitHub Pull Requests Docs
-If you are not familiar with pull requests, review the [pull request docs](https://help.github.com/articles/using-pull-requests/) .
+If you are not familiar with pull requests, review the [pull request docs](https://help.github.com/articles/using-pull-requests/).
## Translations
@@ -48,7 +48,7 @@ We'd like for the guide to be available in many languages. Here is the process f
* Translations follow the content of the original. Contributors must speak at least some English, so that translations do not diverge.
* Each translation has a maintainer to update the translation as the original evolves and to review others' changes. This doesn't require a lot of time, but a review by the maintainer is important to maintain quality.
-See [Translations](TRANSLATIONS.md) .
+See [Translations](TRANSLATIONS.md).
### Changes to translations
@@ -56,7 +56,7 @@ See [Translations](TRANSLATIONS.md) .
* Changes that improve translations should be made directly on the file for that language. Pull requests should only modify one language at a time.
* Submit a pull request with changes to the file in that language. Each language has a maintainer, who reviews changes in that language. Then the primary maintainer [@donnemartin](https://github.com/donnemartin) merges it in.
* Prefix pull requests and issues with language codes if they are for that translation only, e.g. "es: Improve grammar", so maintainers can find them easily.
-* Tag the translation maintainer for a code review, see the list of [translation maintainers](TRANSLATIONS.md) .
+* Tag the translation maintainer for a code review, see the list of [translation maintainers](TRANSLATIONS.md).
* You will need to get a review from a native speaker (preferably the language maintainer) before your pull request is merged.
### Adding translations to new languages
@@ -64,9 +64,9 @@ See [Translations](TRANSLATIONS.md) .
Translations to new languages are always welcome! Keep in mind a translation must be maintained.
* Do you have time to be a maintainer for a new language? Please see the list of [translations](TRANSLATIONS.md) and tell us so we know we can count on you in the future.
-* Check the [translations](TRANSLATIONS.md) , issues, and pull requests to see if a translation is in progress or stalled. If it's in progress, offer to help. If it's stalled, consider becoming the maintainer if you can commit to it.
+* Check the [translations](TRANSLATIONS.md), issues, and pull requests to see if a translation is in progress or stalled. If it's in progress, offer to help. If it's stalled, consider becoming the maintainer if you can commit to it.
* If a translation has not yet been started, file an issue for your language so people know you are working on it and we'll coordinate. Confirm you are native level in the language and are willing to maintain the translation, so it's not orphaned.
-* To get started, fork the repo, then submit a pull request to the main repo with the single file README-xx.md added, where xx is the language code. Use standard [IETF language tags](https://www.w3.org/International/articles/language-tags/) , i.e. the same as is used by Wikipedia, *not* the code for a single country. These are usually just the two-letter lowercase code, for example, `fr` for French and `uk` for Ukrainian (not `ua`, which is for the country) . For languages that have variations, use the shortest tag, such as `zh-Hant`.
+* To get started, fork the repo, then submit a pull request to the main repo with the single file README-xx.md added, where xx is the language code. Use standard [IETF language tags](https://www.w3.org/International/articles/language-tags/), i.e. the same as is used by Wikipedia, *not* the code for a single country. These are usually just the two-letter lowercase code, for example, `fr` for French and `uk` for Ukrainian (not `ua`, which is for the country). For languages that have variations, use the shortest tag, such as `zh-Hant`.
* Feel free to invite friends to help your original translation by having them fork your repo, then merging their pull requests to your forked repo. Translations are difficult and usually have errors that others need to find.
* Add links to your translation at the top of every README-XX.md file. For consistency, the link should be added in alphabetical order by ISO code, and the anchor text should be in the native language.
* When you've fully translated the English README.md, comment on the pull request in the main repo that it's ready to be merged.
diff --git a/LICENSE.txt b/LICENSE.txt
index e2527f91..5a04d642 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -1,6 +1,6 @@
I am providing code and resources in this repository to you under an open source
license. Because this is my personal repository, the license you receive to my
-code and resources is from me and not my employer (Facebook) .
+code and resources is from me and not my employer (Facebook).
Copyright 2017 Donne Martin
diff --git a/README-ja.md b/README-ja.md
index 739a7c5f..ce116705 100644
--- a/README-ja.md
+++ b/README-ja.md
@@ -1,4 +1,4 @@
-*[English](README.md) ∙ [日本語](README-ja.md) ∙ [简体中文](README-zh-Hans.md) ∙ [繁體中文](README-zh-TW.md) | [العَرَبِيَّة](https://github.com/donnemartin/system-design-primer/issues/170) ∙ [বাংলা](https://github.com/donnemartin/system-design-primer/issues/220) ∙ [Português do Brasil](https://github.com/donnemartin/system-design-primer/issues/40) ∙ [Deutsch](https://github.com/donnemartin/system-design-primer/issues/186) ∙ [ελληνικά](https://github.com/donnemartin/system-design-primer/issues/130) ∙ [עברית](https://github.com/donnemartin/system-design-primer/issues/272) ∙ [Italiano](https://github.com/donnemartin/system-design-primer/issues/104) ∙ [한국어](https://github.com/donnemartin/system-design-primer/issues/102) ∙ [فارسی](https://github.com/donnemartin/system-design-primer/issues/110) ∙ [Polski](https://github.com/donnemartin/system-design-primer/issues/68) ∙ [русский язык](https://github.com/donnemartin/system-design-primer/issues/87) ∙ [Español](https://github.com/donnemartin/system-design-primer/issues/136) ∙ [ภาษาไทย](https://github.com/donnemartin/system-design-primer/issues/187) ∙ [Türkçe](https://github.com/donnemartin/system-design-primer/issues/39) ∙ [tiếng Việt](https://github.com/donnemartin/system-design-primer/issues/127) ∙ [Français](https://github.com/donnemartin/system-design-primer/issues/250) | [Add Translation](https://github.com/donnemartin/system-design-primer/issues/28) *
+*[English](README.md) ∙ [日本語](README-ja.md) ∙ [简体中文](README-zh-Hans.md) ∙ [繁體中文](README-zh-TW.md) | [العَرَبِيَّة](https://github.com/donnemartin/system-design-primer/issues/170) ∙ [বাংলা](https://github.com/donnemartin/system-design-primer/issues/220) ∙ [Português do Brasil](https://github.com/donnemartin/system-design-primer/issues/40) ∙ [Deutsch](https://github.com/donnemartin/system-design-primer/issues/186) ∙ [ελληνικά](https://github.com/donnemartin/system-design-primer/issues/130) ∙ [עברית](https://github.com/donnemartin/system-design-primer/issues/272) ∙ [Italiano](https://github.com/donnemartin/system-design-primer/issues/104) ∙ [한국어](https://github.com/donnemartin/system-design-primer/issues/102) ∙ [فارسی](https://github.com/donnemartin/system-design-primer/issues/110) ∙ [Polski](https://github.com/donnemartin/system-design-primer/issues/68) ∙ [русский язык](https://github.com/donnemartin/system-design-primer/issues/87) ∙ [Español](https://github.com/donnemartin/system-design-primer/issues/136) ∙ [ภาษาไทย](https://github.com/donnemartin/system-design-primer/issues/187) ∙ [Türkçe](https://github.com/donnemartin/system-design-primer/issues/39) ∙ [tiếng Việt](https://github.com/donnemartin/system-design-primer/issues/127) ∙ [Français](https://github.com/donnemartin/system-design-primer/issues/250) | [Add Translation](https://github.com/donnemartin/system-design-primer/issues/28)*
# システム設計入門
@@ -35,11 +35,11 @@
面接準備に役立つその他のトピック:
-* [学習指針](#学習指針)
-* [システム設計面接課題にどのように準備するか](#システム設計面接にどのようにして臨めばいいか)
-* [システム設計課題例 **とその解答**](#システム設計課題例とその解答)
-* [オブジェクト指向設計課題例、 **とその解答**](#オブジェクト指向設計問題と解答)
-* [その他のシステム設計面接課題例](#他のシステム設計面接例題)
+* [学習指針](#学習指針)
+* [システム設計面接課題にどのように準備するか](#システム設計面接にどのようにして臨めばいいか)
+* [システム設計課題例 **とその解答**](#システム設計課題例とその解答)
+* [オブジェクト指向設計課題例、 **とその解答**](#オブジェクト指向設計問題と解答)
+* [その他のシステム設計面接課題例](#他のシステム設計面接例題)
## 暗記カード
@@ -50,24 +50,24 @@
この[Anki用フラッシュカードデッキ](https://apps.ankiweb.net/) は、間隔反復を活用して、システム設計のキーコンセプトの学習を支援します。
-* [システム設計デッキ](resources/flash_cards/System%20Design.apkg)
-* [システム設計練習課題デッキ](resources/flash_cards/System%20Design%20Exercises.apkg)
-* [オブジェクト指向練習課題デッキ](resources/flash_cards/OO%20Design.apkg)
+* [システム設計デッキ](resources/flash_cards/System%20Design.apkg)
+* [システム設計練習課題デッキ](resources/flash_cards/System%20Design%20Exercises.apkg)
+* [オブジェクト指向練習課題デッキ](resources/flash_cards/OO%20Design.apkg)
外出先や移動中の勉強に役立つでしょう。
### コーディング技術課題用の問題: 練習用インタラクティブアプリケーション
-コード技術面接用の問題を探している場合は[**こちら**](https://github.com/donnemartin/interactive-coding-challenges)
+コード技術面接用の問題を探している場合は[**こちら**](https://github.com/donnemartin/interactive-coding-challenges)
-Check out the sister repo [**Interactive Coding Challenges**](https://github.com/donnemartin/interactive-coding-challenges) , which contains an additional Anki deck:
+Check out the sister repo [**Interactive Coding Challenges**](https://github.com/donnemartin/interactive-coding-challenges), which contains an additional Anki deck:
-* [Coding deck](https://github.com/donnemartin/interactive-coding-challenges/tree/master/anki_cards/Coding.apkg)
+* [Coding deck](https://github.com/donnemartin/interactive-coding-challenges/tree/master/anki_cards/Coding.apkg)
## Contributing
@@ -80,11 +80,11 @@ Feel free to submit pull requests to help:
* Fix errors
* Improve sections
* Add new sections
-* [Translate](https://github.com/donnemartin/system-design-primer/issues/28)
+* [Translate](https://github.com/donnemartin/system-design-primer/issues/28)
-Content that needs some polishing is placed [under development](#under-development) .
+Content that needs some polishing is placed [under development](#under-development).
-Review the [Contributing Guidelines](CONTRIBUTING.md) .
+Review the [Contributing Guidelines](CONTRIBUTING.md).
## Index of system design topics
@@ -97,93 +97,93 @@ Review the [Contributing Guidelines](CONTRIBUTING.md) .
-* [System design topics: start here](#system-design-topics-start-here)
- * [Step 1: Review the scalability video lecture](#step-1-review-the-scalability-video-lecture)
- * [Step 2: Review the scalability article](#step-2-review-the-scalability-article)
- * [Next steps](#next-steps)
-* [Performance vs scalability](#performance-vs-scalability)
-* [Latency vs throughput](#latency-vs-throughput)
-* [Availability vs consistency](#availability-vs-consistency)
- * [CAP theorem](#cap-theorem)
- * [CP - consistency and partition tolerance](#cp---consistency-and-partition-tolerance)
- * [AP - availability and partition tolerance](#ap---availability-and-partition-tolerance)
-* [Consistency patterns](#consistency-patterns)
- * [Weak consistency](#weak-consistency)
- * [Eventual consistency](#eventual-consistency)
- * [Strong consistency](#strong-consistency)
-* [Availability patterns](#availability-patterns)
- * [Fail-over](#fail-over)
- * [Replication](#replication)
- * [Availability in numbers](#availability-in-numbers)
-* [Domain name system](#domain-name-system)
-* [Content delivery network](#content-delivery-network)
- * [Push CDNs](#push-cdns)
- * [Pull CDNs](#pull-cdns)
-* [Load balancer](#load-balancer)
- * [Active-passive](#active-passive)
- * [Active-active](#active-active)
- * [Layer 4 load balancing](#layer-4-load-balancing)
- * [Layer 7 load balancing](#layer-7-load-balancing)
- * [Horizontal scaling](#horizontal-scaling)
-* [Reverse proxy (web server) ](#reverse-proxy-web-server)
- * [Load balancer vs reverse proxy](#load-balancer-vs-reverse-proxy)
-* [Application layer](#application-layer)
- * [Microservices](#microservices)
- * [Service discovery](#service-discovery)
-* [Database](#database)
- * [Relational database management system (RDBMS) ](#relational-database-management-system-rdbms)
- * [Master-slave replication](#master-slave-replication)
- * [Master-master replication](#master-master-replication)
- * [Federation](#federation)
- * [Sharding](#sharding)
- * [Denormalization](#denormalization)
- * [SQL tuning](#sql-tuning)
- * [NoSQL](#nosql)
- * [Key-value store](#key-value-store)
- * [Document store](#document-store)
- * [Wide column store](#wide-column-store)
- * [Graph Database](#graph-database)
- * [SQL or NoSQL](#sql-or-nosql)
-* [Cache](#cache)
- * [Client caching](#client-caching)
- * [CDN caching](#cdn-caching)
- * [Web server caching](#web-server-caching)
- * [Database caching](#database-caching)
- * [Application caching](#application-caching)
- * [Caching at the database query level](#caching-at-the-database-query-level)
- * [Caching at the object level](#caching-at-the-object-level)
- * [When to update the cache](#when-to-update-the-cache)
- * [Cache-aside](#cache-aside)
- * [Write-through](#write-through)
- * [Write-behind (write-back) ](#write-behind-write-back)
- * [Refresh-ahead](#refresh-ahead)
-* [Asynchronism](#asynchronism)
- * [Message queues](#message-queues)
- * [Task queues](#task-queues)
- * [Back pressure](#back-pressure)
-* [Communication](#communication)
- * [Transmission control protocol (TCP) ](#transmission-control-protocol-tcp)
- * [User datagram protocol (UDP) ](#user-datagram-protocol-udp)
- * [Remote procedure call (RPC) ](#remote-procedure-call-rpc)
- * [Representational state transfer (REST) ](#representational-state-transfer-rest)
-* [Security](#security)
-* [Appendix](#appendix)
- * [Powers of two table](#powers-of-two-table)
- * [Latency numbers every programmer should know](#latency-numbers-every-programmer-should-know)
- * [Additional system design interview questions](#additional-system-design-interview-questions)
- * [Real world architectures](#real-world-architectures)
- * [Company architectures](#company-architectures)
- * [Company engineering blogs](#company-engineering-blogs)
-* [Under development](#under-development)
-* [Credits](#credits)
-* [Contact info](#contact-info)
-* [License](#license)
+* [System design topics: start here](#system-design-topics-start-here)
+ * [Step 1: Review the scalability video lecture](#step-1-review-the-scalability-video-lecture)
+ * [Step 2: Review the scalability article](#step-2-review-the-scalability-article)
+ * [Next steps](#next-steps)
+* [Performance vs scalability](#performance-vs-scalability)
+* [Latency vs throughput](#latency-vs-throughput)
+* [Availability vs consistency](#availability-vs-consistency)
+ * [CAP theorem](#cap-theorem)
+ * [CP - consistency and partition tolerance](#cp---consistency-and-partition-tolerance)
+ * [AP - availability and partition tolerance](#ap---availability-and-partition-tolerance)
+* [Consistency patterns](#consistency-patterns)
+ * [Weak consistency](#weak-consistency)
+ * [Eventual consistency](#eventual-consistency)
+ * [Strong consistency](#strong-consistency)
+* [Availability patterns](#availability-patterns)
+ * [Fail-over](#fail-over)
+ * [Replication](#replication)
+ * [Availability in numbers](#availability-in-numbers)
+* [Domain name system](#domain-name-system)
+* [Content delivery network](#content-delivery-network)
+ * [Push CDNs](#push-cdns)
+ * [Pull CDNs](#pull-cdns)
+* [Load balancer](#load-balancer)
+ * [Active-passive](#active-passive)
+ * [Active-active](#active-active)
+ * [Layer 4 load balancing](#layer-4-load-balancing)
+ * [Layer 7 load balancing](#layer-7-load-balancing)
+ * [Horizontal scaling](#horizontal-scaling)
+* [Reverse proxy (web server)](#reverse-proxy-web-server)
+ * [Load balancer vs reverse proxy](#load-balancer-vs-reverse-proxy)
+* [Application layer](#application-layer)
+ * [Microservices](#microservices)
+ * [Service discovery](#service-discovery)
+* [Database](#database)
+ * [Relational database management system (RDBMS)](#relational-database-management-system-rdbms)
+ * [Master-slave replication](#master-slave-replication)
+ * [Master-master replication](#master-master-replication)
+ * [Federation](#federation)
+ * [Sharding](#sharding)
+ * [Denormalization](#denormalization)
+ * [SQL tuning](#sql-tuning)
+ * [NoSQL](#nosql)
+ * [Key-value store](#key-value-store)
+ * [Document store](#document-store)
+ * [Wide column store](#wide-column-store)
+ * [Graph Database](#graph-database)
+ * [SQL or NoSQL](#sql-or-nosql)
+* [Cache](#cache)
+ * [Client caching](#client-caching)
+ * [CDN caching](#cdn-caching)
+ * [Web server caching](#web-server-caching)
+ * [Database caching](#database-caching)
+ * [Application caching](#application-caching)
+ * [Caching at the database query level](#caching-at-the-database-query-level)
+ * [Caching at the object level](#caching-at-the-object-level)
+ * [When to update the cache](#when-to-update-the-cache)
+ * [Cache-aside](#cache-aside)
+ * [Write-through](#write-through)
+ * [Write-behind (write-back)](#write-behind-write-back)
+ * [Refresh-ahead](#refresh-ahead)
+* [Asynchronism](#asynchronism)
+ * [Message queues](#message-queues)
+ * [Task queues](#task-queues)
+ * [Back pressure](#back-pressure)
+* [Communication](#communication)
+ * [Transmission control protocol (TCP)](#transmission-control-protocol-tcp)
+ * [User datagram protocol (UDP)](#user-datagram-protocol-udp)
+ * [Remote procedure call (RPC)](#remote-procedure-call-rpc)
+ * [Representational state transfer (REST)](#representational-state-transfer-rest)
+* [Security](#security)
+* [Appendix](#appendix)
+ * [Powers of two table](#powers-of-two-table)
+ * [Latency numbers every programmer should know](#latency-numbers-every-programmer-should-know)
+ * [Additional system design interview questions](#additional-system-design-interview-questions)
+ * [Real world architectures](#real-world-architectures)
+ * [Company architectures](#company-architectures)
+ * [Company engineering blogs](#company-engineering-blogs)
+* [Under development](#under-development)
+* [Credits](#credits)
+* [Contact info](#contact-info)
+* [License](#license)
## Study guide
-> Suggested topics to review based on your interview timeline (short, medium, long) .
+> Suggested topics to review based on your interview timeline (short, medium, long).
-
+
**Q: For interviews, do I need to know everything here?**
@@ -245,10 +245,10 @@ Outline a high level design with all important components.
### Step 3: Design core components
-Dive into details for each core component. For example, if you were asked to [design a url shortening service](solutions/system_design/pastebin/README.md) , discuss:
+Dive into details for each core component. For example, if you were asked to [design a url shortening service](solutions/system_design/pastebin/README.md), discuss:
* Generating and storing a hash of the full url
- * [MD5](solutions/system_design/pastebin/README.md) and [Base62](solutions/system_design/pastebin/README.md)
+ * [MD5](solutions/system_design/pastebin/README.md) and [Base62](solutions/system_design/pastebin/README.md)
* Hash collisions
* SQL or NoSQL
* Database schema
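The hash-and-encode step above can be sketched in a few lines. This is a minimal illustration, not the solution's actual implementation: the 7-character length and the `shorten` helper name are assumptions for the example.

```python
import hashlib

# Base62 alphabet: digits + lowercase + uppercase.
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def base62_encode(num):
    """Encode a non-negative integer in Base62."""
    if num == 0:
        return ALPHABET[0]
    digits = []
    while num:
        num, rem = divmod(num, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

def shorten(url, length=7):
    """Hash the full url with MD5, then Base62-encode and truncate.

    Hypothetical helper for illustration; collisions on the truncated
    code would still need to be detected and resolved.
    """
    digest = hashlib.md5(url.encode()).hexdigest()
    return base62_encode(int(digest, 16))[:length]
```

The same url always maps to the same code, which is why hash collisions (two urls truncating to the same code) must be handled explicitly.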
@@ -265,24 +265,24 @@ Identify and address bottlenecks, given the constraints. For example, do you ne
* Caching
* Database sharding
-Discuss potential solutions and trade-offs. Everything is a trade-off. Address bottlenecks using [principles of scalable system design](#index-of-system-design-topics) .
+Discuss potential solutions and trade-offs. Everything is a trade-off. Address bottlenecks using [principles of scalable system design](#index-of-system-design-topics).
### Back-of-the-envelope calculations
You might be asked to do some estimates by hand. Refer to the [Appendix](#appendix) for the following resources:
-* [Use back of the envelope calculations](http://highscalability.com/blog/2011/1/26/google-pro-tip-use-back-of-the-envelope-calculations-to-choo.html)
-* [Powers of two table](#powers-of-two-table)
-* [Latency numbers every programmer should know](#latency-numbers-every-programmer-should-know)
+* [Use back of the envelope calculations](http://highscalability.com/blog/2011/1/26/google-pro-tip-use-back-of-the-envelope-calculations-to-choo.html)
+* [Powers of two table](#powers-of-two-table)
+* [Latency numbers every programmer should know](#latency-numbers-every-programmer-should-know)
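A back-of-the-envelope estimate is usually just a handful of multiplications. As a sketch with entirely hypothetical inputs (40 million writes per month at ~1.27 KB each), using the common shortcut of ~2.5 million seconds per month:

```python
SECONDS_PER_MONTH = 2.5 * 10**6   # ~30 days, a handy round number
writes_per_month = 40 * 10**6     # hypothetical: 40 million writes per month
bytes_per_write = 1.27 * 10**3    # hypothetical: ~1.27 KB per record

write_qps = writes_per_month / SECONDS_PER_MONTH        # average writes per second
storage_3yr = writes_per_month * bytes_per_write * 36   # bytes stored over 3 years
```

With these inputs that works out to roughly 16 writes per second on average and about 1.8 TB of new content over three years, which is the level of precision interviewers expect.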
### Source(s) and further reading
Check out the following links to get a better idea of what to expect:
-* [How to ace a systems design interview](https://www.palantir.com/2011/10/how-to-rock-a-systems-design-interview/)
-* [The system design interview](http://www.hiredintech.com/system-design)
-* [Intro to Architecture and Systems Design Interviews](https://www.youtube.com/watch?v=ZgdS0EUmn70)
-* [System design template](https://leetcode.com/discuss/career/229177/My-System-Design-Template)
+* [How to ace a systems design interview](https://www.palantir.com/2011/10/how-to-rock-a-systems-design-interview/)
+* [The system design interview](http://www.hiredintech.com/system-design)
+* [Intro to Architecture and Systems Design Interviews](https://www.youtube.com/watch?v=ZgdS0EUmn70)
+* [System design template](https://leetcode.com/discuss/career/229177/My-System-Design-Template)
## System design interview questions with solutions
@@ -302,53 +302,53 @@ Check out the following links to get a better idea of what to expect:
| Design a system that scales to millions of users on AWS | [Solution](solutions/system_design/scaling_aws/README.md) |
| Add a system design question | [Contribute](#contributing) |
-### Design Pastebin.com (or Bit.ly)
+### Design Pastebin.com (or Bit.ly)
-[View exercise and solution](solutions/system_design/pastebin/README.md)
+[View exercise and solution](solutions/system_design/pastebin/README.md)
-
+
-### Design the Twitter timeline and search (or Facebook feed and search)
+### Design the Twitter timeline and search (or Facebook feed and search)
-[View exercise and solution](solutions/system_design/twitter/README.md)
+[View exercise and solution](solutions/system_design/twitter/README.md)
-
+
### Design a web crawler
-[View exercise and solution](solutions/system_design/web_crawler/README.md)
+[View exercise and solution](solutions/system_design/web_crawler/README.md)
-
+
### Design Mint.com
-[View exercise and solution](solutions/system_design/mint/README.md)
+[View exercise and solution](solutions/system_design/mint/README.md)
-
+
### Design the data structures for a social network
-[View exercise and solution](solutions/system_design/social_graph/README.md)
+[View exercise and solution](solutions/system_design/social_graph/README.md)
-
+
### Design a key-value store for a search engine
-[View exercise and solution](solutions/system_design/query_cache/README.md)
+[View exercise and solution](solutions/system_design/query_cache/README.md)
-
+
### Design Amazon's sales ranking by category feature
-[View exercise and solution](solutions/system_design/sales_rank/README.md)
+[View exercise and solution](solutions/system_design/sales_rank/README.md)
-
+
### Design a system that scales to millions of users on AWS
-[View exercise and solution](solutions/system_design/scaling_aws/README.md)
+[View exercise and solution](solutions/system_design/scaling_aws/README.md)
-
+
## Object-oriented design interview questions with solutions
@@ -360,13 +360,13 @@ Check out the following links to get a better idea of what to expect:
| Question | |
|---|---|
-| Design a hash map | [Solution](solutions/object_oriented_design/hash_table/hash_map.ipynb) |
-| Design a least recently used cache | [Solution](solutions/object_oriented_design/lru_cache/lru_cache.ipynb) |
-| Design a call center | [Solution](solutions/object_oriented_design/call_center/call_center.ipynb) |
-| Design a deck of cards | [Solution](solutions/object_oriented_design/deck_of_cards/deck_of_cards.ipynb) |
-| Design a parking lot | [Solution](solutions/object_oriented_design/parking_lot/parking_lot.ipynb) |
-| Design a chat server | [Solution](solutions/object_oriented_design/online_chat/online_chat.ipynb) |
-| Design a circular array | [Contribute](#contributing) |
+| Design a hash map | [Solution](solutions/object_oriented_design/hash_table/hash_map.ipynb) |
+| Design a least recently used cache | [Solution](solutions/object_oriented_design/lru_cache/lru_cache.ipynb) |
+| Design a call center | [Solution](solutions/object_oriented_design/call_center/call_center.ipynb) |
+| Design a deck of cards | [Solution](solutions/object_oriented_design/deck_of_cards/deck_of_cards.ipynb) |
+| Design a parking lot | [Solution](solutions/object_oriented_design/parking_lot/parking_lot.ipynb) |
+| Design a chat server | [Solution](solutions/object_oriented_design/online_chat/online_chat.ipynb) |
+| Design a circular array | [Contribute](#contributing) |
| Add an object-oriented design question | [Contribute](#contributing) |
## System design topics: start here
@@ -377,7 +377,7 @@ First, you'll need a basic understanding of common principles, learning about wh
### Step 1: Review the scalability video lecture
-[Scalability Lecture at Harvard](https://www.youtube.com/watch?v=-W9F__D3oY4)
+[Scalability Lecture at Harvard](https://www.youtube.com/watch?v=-W9F__D3oY4)
* Topics covered:
* Vertical scaling
@@ -389,13 +389,13 @@ First, you'll need a basic understanding of common principles, learning about wh
### Step 2: Review the scalability article
-[Scalability](http://www.lecloud.net/tagged/scalability/chrono)
+[Scalability](http://www.lecloud.net/tagged/scalability/chrono)
* Topics covered:
- * [Clones](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
- * [Databases](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
- * [Caches](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
- * [Asynchronism](http://www.lecloud.net/post/9699762917/scalability-for-dummies-part-4-asynchronism)
+ * [Clones](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
+ * [Databases](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
+ * [Caches](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
+ * [Asynchronism](http://www.lecloud.net/post/9699762917/scalability-for-dummies-part-4-asynchronism)
### Next steps
@@ -420,8 +420,8 @@ Another way to look at performance vs scalability:
### Source(s) and further reading
-* [A word on scalability](http://www.allthingsdistributed.com/2006/03/a_word_on_scalability.html)
-* [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
+* [A word on scalability](http://www.allthingsdistributed.com/2006/03/a_word_on_scalability.html)
+* [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
## Latency vs throughput
@@ -433,7 +433,7 @@ Generally, you should aim for **maximal throughput** with **acceptable latency**
### Source(s) and further reading
-* [Understanding latency vs throughput](https://community.cadence.com/cadence_blogs_8/b/sd/archive/2010/09/13/understanding-latency-vs-throughput)
+* [Understanding latency vs throughput](https://community.cadence.com/cadence_blogs_8/b/sd/archive/2010/09/13/understanding-latency-vs-throughput)
## Availability vs consistency
@@ -465,10 +465,10 @@ AP is a good choice if the business needs allow for [eventual consistency](#even
### Source(s) and further reading
-* [CAP theorem revisited](http://robertgreiner.com/2014/08/cap-theorem-revisited/)
-* [A plain english introduction to CAP theorem](http://ksat.me/a-plain-english-introduction-to-cap-theorem)
-* [CAP FAQ](https://github.com/henryr/cap-faq)
-* [The CAP theorem](https://www.youtube.com/watch?v=k-Yaq8AHlFA)
+* [CAP theorem revisited](http://robertgreiner.com/2014/08/cap-theorem-revisited/)
+* [A plain english introduction to CAP theorem](http://ksat.me/a-plain-english-introduction-to-cap-theorem)
+* [CAP FAQ](https://github.com/henryr/cap-faq)
+* [The CAP theorem](https://www.youtube.com/watch?v=k-Yaq8AHlFA)
## Consistency patterns
@@ -482,7 +482,7 @@ This approach is seen in systems such as memcached. Weak consistency works well
### Eventual consistency
-After a write, reads will eventually see it (typically within milliseconds) . Data is replicated asynchronously.
+After a write, reads will eventually see it (typically within milliseconds). Data is replicated asynchronously.
This approach is seen in systems such as DNS and email. Eventual consistency works well in highly available systems.
@@ -494,7 +494,7 @@ This approach is seen in file systems and RDBMSes. Strong consistency works wel
### Source(s) and further reading
-* [Transactions across data centers](http://snarfed.org/transactions_across_datacenters_io.html)
+* [Transactions across data centers](http://snarfed.org/transactions_across_datacenters_io.html)
## Availability patterns
@@ -518,7 +518,7 @@ If the servers are public-facing, the DNS would need to know about the public IP
Active-active failover can also be referred to as master-master failover.
-### Disadvantage(s) : failover
+### Disadvantage(s): failover
* Fail-over adds more hardware and additional complexity.
* There is a potential for loss of data if the active system fails before any newly written data can be replicated to the passive.
@@ -529,8 +529,8 @@ Active-active failover can also be referred to as master-master failover.
This topic is further discussed in the [Database](#database) section:
-* [Master-slave replication](#master-slave-replication)
-* [Master-master replication](#master-master-replication)
+* [Master-slave replication](#master-slave-replication)
+* [Master-master replication](#master-master-replication)
### Availability in numbers
@@ -563,7 +563,7 @@ If a service consists of multiple components prone to failure, the service's ove
Overall availability decreases when two components with availability < 100% are in sequence:
```
-Availability (Total) = Availability (Foo) * Availability (Bar)
+Availability (Total) = Availability (Foo) * Availability (Bar)
```
If both `Foo` and `Bar` each had 99.9% availability, their total availability in sequence would be 99.8%.
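The sequence rule above, and its complement for components in parallel, can be written as two small helpers. A minimal sketch, with the function names chosen for the example:

```python
def availability_in_sequence(*components):
    """Total availability when every component must be up: multiply them."""
    total = 1.0
    for a in components:
        total *= a
    return total

def availability_in_parallel(*components):
    """Total availability when any one component suffices:
    one minus the product of each component's downtime."""
    downtime = 1.0
    for a in components:
        downtime *= 1.0 - a
    return 1.0 - downtime
```

Two 99.9% components in sequence yield ~99.8%; the same two in parallel yield ~99.9999%, which is why redundancy raises availability.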
@@ -588,33 +588,33 @@ If both `Foo` and `Bar` each had 99.9% availability, their total availability in
A Domain Name System (DNS) translates a domain name such as www.example.com to an IP address.
-DNS is hierarchical, with a few authoritative servers at the top level. Your router or ISP provides information about which DNS server(s) to contact when doing a lookup. Lower level DNS servers cache mappings, which could become stale due to DNS propagation delays. DNS results can also be cached by your browser or OS for a certain period of time, determined by the [time to live (TTL) ](https://en.wikipedia.org/wiki/Time_to_live) .
+DNS is hierarchical, with a few authoritative servers at the top level. Your router or ISP provides information about which DNS server(s) to contact when doing a lookup. Lower level DNS servers cache mappings, which could become stale due to DNS propagation delays. DNS results can also be cached by your browser or OS for a certain period of time, determined by the [time to live (TTL)](https://en.wikipedia.org/wiki/Time_to_live).
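The TTL-driven caching described above can be sketched as a toy lookup cache. This is an illustration of the expiry behavior only, not a real resolver; the class name and record values are assumptions:

```python
import time

class DnsCache:
    """Toy DNS cache: entries are served until their TTL elapses."""
    def __init__(self):
        self._records = {}  # name -> (ip, expires_at)

    def put(self, name, ip, ttl_seconds):
        self._records[name] = (ip, time.monotonic() + ttl_seconds)

    def get(self, name):
        entry = self._records.get(name)
        if entry is None:
            return None  # cache miss: would query an upstream DNS server
        ip, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._records[name]  # stale: TTL elapsed, force a fresh lookup
            return None
        return ip
```

A long TTL means fewer upstream lookups but a longer window in which cached mappings can be stale after a record changes.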
-* **NS record (name server) ** - Specifies the DNS servers for your domain/subdomain.
-* **MX record (mail exchange) ** - Specifies the mail servers for accepting messages.
-* **A record (address) ** - Points a name to an IP address.
-* **CNAME (canonical) ** - Points a name to another name or `CNAME` (example.com to www.example.com) or to an `A` record.
+* **NS record (name server)** - Specifies the DNS servers for your domain/subdomain.
+* **MX record (mail exchange)** - Specifies the mail servers for accepting messages.
+* **A record (address)** - Points a name to an IP address.
+* **CNAME (canonical)** - Points a name to another name or `CNAME` (example.com to www.example.com) or to an `A` record.
Services such as [CloudFlare](https://www.cloudflare.com/dns/) and [Route 53](https://aws.amazon.com/route53/) provide managed DNS services. Some DNS services can route traffic through various methods:
-* [Weighted round robin](https://www.g33kinfo.com/info/round-robin-vs-weighted-round-robin-lb)
+* [Weighted round robin](https://www.g33kinfo.com/info/round-robin-vs-weighted-round-robin-lb)
* Prevent traffic from going to servers under maintenance
* Balance between varying cluster sizes
* A/B testing
-* [Latency-based](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-latency)
-* [Geolocation-based](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-geo)
+* [Latency-based](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-latency)
+* [Geolocation-based](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-geo)
-### Disadvantage(s) : DNS
+### Disadvantage(s): DNS
* Accessing a DNS server introduces a slight delay, although this is mitigated by the caching described above.
-* DNS server management could be complex and is generally managed by [governments, ISPs, and large companies](http://superuser.com/questions/472695/who-controls-the-dns-servers/472729) .
-* DNS services have recently come under [DDoS attack](http://dyn.com/blog/dyn-analysis-summary-of-friday-october-21-attack/) , preventing users from accessing websites such as Twitter without knowing Twitter's IP address(es) .
+* DNS server management could be complex and is generally managed by [governments, ISPs, and large companies](http://superuser.com/questions/472695/who-controls-the-dns-servers/472729).
+* DNS services have recently come under [DDoS attack](http://dyn.com/blog/dyn-analysis-summary-of-friday-october-21-attack/), preventing users from accessing websites such as Twitter without knowing Twitter's IP address(es).
### Source(s) and further reading
-* [DNS architecture](https://technet.microsoft.com/en-us/library/dd197427(v=ws.10) .aspx)
-* [Wikipedia](https://en.wikipedia.org/wiki/Domain_Name_System)
-* [DNS articles](https://support.dnsimple.com/categories/dns/)
+* [DNS architecture](https://technet.microsoft.com/en-us/library/dd197427(v=ws.10).aspx)
+* [Wikipedia](https://en.wikipedia.org/wiki/Domain_Name_System)
+* [DNS articles](https://support.dnsimple.com/categories/dns/)
## Content delivery network
@@ -641,11 +641,11 @@ Sites with a small amount of traffic or sites with content that isn't often upda
Pull CDNs grab new content from your server when the first user requests the content. You leave the content on your server and rewrite URLs to point to the CDN. This results in a slower request until the content is cached on the CDN.
-A [time-to-live (TTL) ](https://en.wikipedia.org/wiki/Time_to_live) determines how long content is cached. Pull CDNs minimize storage space on the CDN, but can create redundant traffic if files expire and are pulled before they have actually changed.
+A [time-to-live (TTL)](https://en.wikipedia.org/wiki/Time_to_live) determines how long content is cached. Pull CDNs minimize storage space on the CDN, but can create redundant traffic if files expire and are pulled before they have actually changed.
Sites with heavy traffic work well with pull CDNs, as traffic is spread out more evenly with only recently-requested content remaining on the CDN.
-### Disadvantage(s) : CDN
+### Disadvantage(s): CDN
* CDN costs could be significant depending on traffic, although this should be weighed against the additional costs you would incur by not using a CDN.
* Content might be stale if it is updated before the TTL expires it.
@@ -653,9 +653,9 @@ Sites with heavy traffic work well with pull CDNs, as traffic is spread out more
### Source(s) and further reading
-* [Globally distributed content delivery](https://figshare.com/articles/Globally_distributed_content_delivery/6605972)
-* [The differences between push and pull CDNs](http://www.travelblogadvice.com/technical/the-differences-between-push-and-pull-cdns/)
-* [Wikipedia](https://en.wikipedia.org/wiki/Content_delivery_network)
+* [Globally distributed content delivery](https://figshare.com/articles/Globally_distributed_content_delivery/6605972)
+* [The differences between push and pull CDNs](http://www.travelblogadvice.com/technical/the-differences-between-push-and-pull-cdns/)
+* [Wikipedia](https://en.wikipedia.org/wiki/Content_delivery_network)
## Load balancer
@@ -686,13 +686,13 @@ Load balancers can route traffic based on various metrics, including:
* Random
* Least loaded
* Session/cookies
-* [Round robin or weighted round robin](https://www.g33kinfo.com/info/round-robin-vs-weighted-round-robin-lb)
-* [Layer 4](#layer-4-load-balancing)
-* [Layer 7](#layer-7-load-balancing)
+* [Round robin or weighted round robin](https://www.g33kinfo.com/info/round-robin-vs-weighted-round-robin-lb)
+* [Layer 4](#layer-4-load-balancing)
+* [Layer 7](#layer-7-load-balancing)
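Weighted round robin, one of the routing methods listed above, can be sketched by expanding each server according to its weight and cycling through the result. The server names and weights are hypothetical:

```python
import itertools

def weighted_round_robin(servers):
    """Yield server names in proportion to their weights.

    servers: list of (name, weight) pairs. A server with weight 3
    receives three requests for every one sent to a weight-1 server.
    """
    expanded = [name for name, weight in servers for _ in range(weight)]
    return itertools.cycle(expanded)

picker = weighted_round_robin([("app1", 3), ("app2", 1)])
```

Each call to `next(picker)` returns the next server in the weighted rotation.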
### Layer 4 load balancing
-Layer 4 load balancers look at info at the [transport layer](#communication) to decide how to distribute requests. Generally, this involves the source, destination IP addresses, and ports in the header, but not the contents of the packet. Layer 4 load balancers forward network packets to and from the upstream server, performing [Network Address Translation (NAT) ](https://www.nginx.com/resources/glossary/layer-4-load-balancing/) .
+Layer 4 load balancers look at info at the [transport layer](#communication) to decide how to distribute requests. Generally, this involves the source, destination IP addresses, and ports in the header, but not the contents of the packet. Layer 4 load balancers forward network packets to and from the upstream server, performing [Network Address Translation (NAT)](https://www.nginx.com/resources/glossary/layer-4-load-balancing/).
### Layer 7 load balancing
@@ -704,14 +704,14 @@ At the cost of flexibility, layer 4 load balancing requires less time and comput
Load balancers can also help with horizontal scaling, improving performance and availability. Scaling out using commodity machines is more cost-efficient and results in higher availability than scaling up a single server on more expensive hardware, called **Vertical Scaling**. It is also easier to hire talent to work on commodity hardware than on specialized enterprise systems.
-#### Disadvantage(s) : horizontal scaling
+#### Disadvantage(s): horizontal scaling
* Scaling horizontally introduces complexity and involves cloning servers
* Servers should be stateless: they should not contain any user-related data like sessions or profile pictures
- * Sessions can be stored in a centralized data store such as a [database](#database) (SQL, NoSQL) or a persistent [cache](#cache) (Redis, Memcached)
+ * Sessions can be stored in a centralized data store such as a [database](#database) (SQL, NoSQL) or a persistent [cache](#cache) (Redis, Memcached)
* Downstream servers such as caches and databases need to handle more simultaneous connections as upstream servers scale out
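Keeping servers stateless by moving sessions into a centralized store can be sketched as follows. The dict backend here is a stand-in for illustration; in production the backend would be a shared Redis or Memcached instance reachable from every app server:

```python
import json
import uuid

class SessionStore:
    """Centralized session store so web servers stay stateless.

    Backed by a plain dict for this sketch; any app server holding
    only the session id can recover the user's session data.
    """
    def __init__(self, backend=None):
        self._backend = backend if backend is not None else {}

    def create(self, user_data):
        session_id = uuid.uuid4().hex
        self._backend[session_id] = json.dumps(user_data)
        return session_id  # sent back to the client, e.g. in a cookie

    def load(self, session_id):
        raw = self._backend.get(session_id)
        return json.loads(raw) if raw is not None else None
```

Because no session lives on any single web server, any server behind the load balancer can handle any request.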
-### Disadvantage(s) : load balancer
+### Disadvantage(s): load balancer
* The load balancer can become a performance bottleneck if it does not have enough resources or if it is not configured properly.
* Introducing a load balancer to help eliminate a single point of failure results in increased complexity.
@@ -719,15 +719,15 @@ Load balancers can also help with horizontal scaling, improving performance and
### Source(s) and further reading
-* [NGINX architecture](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
-* [HAProxy architecture guide](http://www.haproxy.org/download/1.2/doc/architecture.txt)
-* [Scalability](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
+* [NGINX architecture](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
+* [HAProxy architecture guide](http://www.haproxy.org/download/1.2/doc/architecture.txt)
+* [Scalability](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
* [Wikipedia](https://en.wikipedia.org/wiki/Load_balancing_(computing))
-* [Layer 4 load balancing](https://www.nginx.com/resources/glossary/layer-4-load-balancing/)
-* [Layer 7 load balancing](https://www.nginx.com/resources/glossary/layer-7-load-balancing/)
-* [ELB listener config](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html)
+* [Layer 4 load balancing](https://www.nginx.com/resources/glossary/layer-4-load-balancing/)
+* [Layer 7 load balancing](https://www.nginx.com/resources/glossary/layer-7-load-balancing/)
+* [ELB listener config](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html)
-## Reverse proxy (web server)
+## Reverse proxy (web server)
@@ -758,17 +758,17 @@ Additional benefits include:
* Reverse proxies can be useful even with just one web server or application server, opening up the benefits described in the previous section.
* Solutions such as NGINX and HAProxy can support both layer 7 reverse proxying and load balancing.
-### Disadvantage(s) : reverse proxy
+### Disadvantage(s): reverse proxy
* Introducing a reverse proxy results in increased complexity.
* A single reverse proxy is a single point of failure; configuring multiple reverse proxies (i.e., a [failover](https://en.wikipedia.org/wiki/Failover)) further increases complexity.
### Source(s) and further reading
-* [Reverse proxy vs load balancer](https://www.nginx.com/resources/glossary/reverse-proxy-vs-load-balancer/)
-* [NGINX architecture](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
-* [HAProxy architecture guide](http://www.haproxy.org/download/1.2/doc/architecture.txt)
-* [Wikipedia](https://en.wikipedia.org/wiki/Reverse_proxy)
+* [Reverse proxy vs load balancer](https://www.nginx.com/resources/glossary/reverse-proxy-vs-load-balancer/)
+* [NGINX architecture](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
+* [HAProxy architecture guide](http://www.haproxy.org/download/1.2/doc/architecture.txt)
+* [Wikipedia](https://en.wikipedia.org/wiki/Reverse_proxy)
## Application layer
@@ -780,30 +780,30 @@ Additional benefits include:
Separating out the web layer from the application layer (also known as platform layer) allows you to scale and configure both layers independently. Adding a new API results in adding application servers without necessarily adding additional web servers. The **single responsibility principle** advocates for small and autonomous services that work together. Small teams with small services can plan more aggressively for rapid growth.
-Workers in the application layer also help enable [asynchronism](#asynchronism) .
+Workers in the application layer also help enable [asynchronism](#asynchronism).
### Microservices
-Related to this discussion are [microservices](https://en.wikipedia.org/wiki/Microservices) , which can be described as a suite of independently deployable, small, modular services. Each service runs a unique process and communicates through a well-defined, lightweight mechanism to serve a business goal. 1
+Related to this discussion are [microservices](https://en.wikipedia.org/wiki/Microservices), which can be described as a suite of independently deployable, small, modular services. Each service runs a unique process and communicates through a well-defined, lightweight mechanism to serve a business goal. [1]
Pinterest, for example, could have the following microservices: user profile, follower, feed, search, photo upload, etc.
### Service Discovery
-Systems such as [Consul](https://www.consul.io/docs/index.html) , [Etcd](https://coreos.com/etcd/docs/latest) , and [Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) can help services find each other by keeping track of registered names, addresses, and ports. [Health checks](https://www.consul.io/intro/getting-started/checks.html) help verify service integrity and are often done using an [HTTP](#hypertext-transfer-protocol-http) endpoint. Both Consul and Etcd have a built in [key-value store](#key-value-store) that can be useful for storing config values and other shared data.
+Systems such as [Consul](https://www.consul.io/docs/index.html), [Etcd](https://coreos.com/etcd/docs/latest), and [Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) can help services find each other by keeping track of registered names, addresses, and ports. [Health checks](https://www.consul.io/intro/getting-started/checks.html) help verify service integrity and are often done using an [HTTP](#hypertext-transfer-protocol-http) endpoint. Both Consul and Etcd have a built-in [key-value store](#key-value-store) that can be useful for storing config values and other shared data.
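To make the register/heartbeat/lookup pattern concrete, here is a minimal sketch of a toy in-memory registry. This is illustrative only (the class and its TTL-based health check are assumptions, not Consul's or Etcd's actual API); real systems distribute and replicate this state.

```python
import time

class ServiceRegistry:
    """Toy in-memory service registry (illustrative only; Consul, Etcd, and
    Zookeeper provide distributed, fault-tolerant versions of this idea)."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self.services = {}  # name -> (address, port, last_heartbeat)

    def register(self, name, address, port):
        self.services[name] = (address, port, time.time())

    def heartbeat(self, name):
        # Services periodically check in to prove they are still healthy
        if name in self.services:
            address, port, _ = self.services[name]
            self.services[name] = (address, port, time.time())

    def lookup(self, name):
        entry = self.services.get(name)
        if entry is None:
            return None
        address, port, last_seen = entry
        # Treat services with stale heartbeats as unhealthy
        if time.time() - last_seen > self.ttl:
            return None
        return (address, port)

registry = ServiceRegistry(ttl_seconds=30)
registry.register("user-profile", "10.0.0.5", 8080)
print(registry.lookup("user-profile"))  # ('10.0.0.5', 8080)
```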
-### Disadvantage(s) : application layer
+### Disadvantage(s): application layer
-* Adding an application layer with loosely coupled services requires a different approach from an architectural, operations, and process viewpoint (vs a monolithic system) .
+* Adding an application layer with loosely coupled services requires a different approach from an architectural, operations, and process viewpoint (vs a monolithic system).
* Microservices can add complexity in terms of deployments and operations.
### Source(s) and further reading
-* [Intro to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale)
-* [Crack the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
-* [Service oriented architecture](https://en.wikipedia.org/wiki/Service-oriented_architecture)
-* [Introduction to Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper)
-* [Here's what you need to know about building microservices](https://cloudncode.wordpress.com/2016/07/22/msa-getting-started/)
+* [Intro to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale)
+* [Crack the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
+* [Service oriented architecture](https://en.wikipedia.org/wiki/Service-oriented_architecture)
+* [Introduction to Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper)
+* [Here's what you need to know about building microservices](https://cloudncode.wordpress.com/2016/07/22/msa-getting-started/)
## Database
@@ -813,11 +813,11 @@ Systems such as [Consul](https://www.consul.io/docs/index.html) , [Etcd](https:/
Source: Scaling up to your first 10 million users
-### Relational database management system (RDBMS)
+### Relational database management system (RDBMS)
A relational database like SQL is a collection of data items organized in tables.
-**ACID** is a set of properties of relational database [transactions](https://en.wikipedia.org/wiki/Database_transaction) .
+**ACID** is a set of properties of relational database [transactions](https://en.wikipedia.org/wiki/Database_transaction).
* **Atomicity** - Each transaction is all or nothing
* **Consistency** - Any transaction will bring the database from one valid state to another
@@ -836,10 +836,10 @@ The master serves reads and writes, replicating writes to one or more slaves, wh
Source: Scalability, availability, stability, patterns
-##### Disadvantage(s) : master-slave replication
+##### Disadvantage(s): master-slave replication
* Additional logic is needed to promote a slave to a master.
-* See [Disadvantage(s) : replication](#disadvantages-replication) for points related to **both** master-slave and master-master.
+* See [Disadvantage(s): replication](#disadvantages-replication) for points related to **both** master-slave and master-master.
#### Master-master replication
@@ -851,14 +851,14 @@ Both masters serve reads and writes and coordinate with each other on writes. I
Source: Scalability, availability, stability, patterns
-##### Disadvantage(s) : master-master replication
+##### Disadvantage(s): master-master replication
* You'll need a load balancer or you'll need to make changes to your application logic to determine where to write.
* Most master-master systems are either loosely consistent (violating ACID) or have increased write latency due to synchronization.
* Conflict resolution comes more into play as more write nodes are added and as latency increases.
-* See [Disadvantage(s) : replication](#disadvantages-replication) for points related to **both** master-slave and master-master.
+* See [Disadvantage(s): replication](#disadvantages-replication) for points related to **both** master-slave and master-master.
-##### Disadvantage(s) : replication
+##### Disadvantage(s): replication
* There is a potential for loss of data if the master fails before any newly written data can be replicated to other nodes.
* Writes are replayed to the read replicas. If there are a lot of writes, the read replicas can get bogged down with replaying writes and can't do as many reads.
@@ -868,8 +868,8 @@ Both masters serve reads and writes and coordinate with each other on writes. I
##### Source(s) and further reading: replication
-* [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
-* [Multi-master replication](https://en.wikipedia.org/wiki/Multi-master_replication)
+* [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
+* [Multi-master replication](https://en.wikipedia.org/wiki/Multi-master_replication)
#### Federation
@@ -881,16 +881,16 @@ Both masters serve reads and writes and coordinate with each other on writes. I
Federation (or functional partitioning) splits up databases by function. For example, instead of a single, monolithic database, you could have three databases: **forums**, **users**, and **products**, resulting in less read and write traffic to each database and therefore less replication lag. Smaller databases result in more data that can fit in memory, which in turn results in more cache hits due to improved cache locality. With no single central master serializing writes you can write in parallel, increasing throughput.
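The routing logic this implies is simple: the application picks a database by functional area. A minimal sketch, with hypothetical connection strings standing in for real database clients:

```python
# Hypothetical connection handles for each functional database; in practice
# these would be real client/connection objects.
connections = {
    "forums": "forums-db:5432",
    "users": "users-db:5432",
    "products": "products-db:5432",
}

def get_connection(function):
    """Route a query to the federated database that owns this functional area."""
    if function not in connections:
        raise ValueError("No federated database for: " + function)
    return connections[function]

print(get_connection("users"))  # users-db:5432
```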
-##### Disadvantage(s) : federation
+##### Disadvantage(s): federation
* Federation is not effective if your schema requires huge functions or tables.
* You'll need to update your application logic to determine which database to read and write.
-* Joining data from two databases is more complex with a [server link](http://stackoverflow.com/questions/5145637/querying-data-by-joining-two-tables-in-two-database-on-different-servers) .
+* Joining data from two databases is more complex with a [server link](http://stackoverflow.com/questions/5145637/querying-data-by-joining-two-tables-in-two-database-on-different-servers).
* Federation adds more hardware and additional complexity.
##### Source(s) and further reading: federation
-* [Scaling up to your first 10 million users](https://www.youtube.com/watch?v=kKjm4ehYiMs)
+* [Scaling up to your first 10 million users](https://www.youtube.com/watch?v=kKjm4ehYiMs)
#### Sharding
@@ -902,11 +902,11 @@ Federation (or functional partitioning) splits up databases by function. For ex
Sharding distributes data across different databases such that each database can only manage a subset of the data. Taking a users database as an example, as the number of users increases, more shards are added to the cluster.
-Similar to the advantages of [federation](#federation) , sharding results in less read and write traffic, less replication, and more cache hits. Index size is also reduced, which generally improves performance with faster queries. If one shard goes down, the other shards are still operational, although you'll want to add some form of replication to avoid data loss. Like federation, there is no single central master serializing writes, allowing you to write in parallel with increased throughput.
+Similar to the advantages of [federation](#federation), sharding results in less read and write traffic, less replication, and more cache hits. Index size is also reduced, which generally improves performance with faster queries. If one shard goes down, the other shards are still operational, although you'll want to add some form of replication to avoid data loss. Like federation, there is no single central master serializing writes, allowing you to write in parallel with increased throughput.
Common ways to shard a table of users include sharding by the user's last name initial or by the user's geographic location.
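Both approaches can be sketched in a few lines. The shard counts and name ranges below are illustrative assumptions; note that simple modulo hashing remaps most keys when a shard is added, which consistent hashing (linked below) avoids:

```python
import hashlib

def shard_for_user(user_id, num_shards):
    """Hash-based sharding: deterministically map a user id to a shard.
    Uses a stable hash; modulo placement is simple but remaps most keys
    when num_shards changes, unlike consistent hashing."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return int(digest, 16) % num_shards

def shard_by_last_name(last_name):
    """Range-based sharding by last name initial (illustrative ranges:
    A-H -> shard 0, I-P -> shard 1, Q-Z -> shard 2)."""
    initial = last_name[0].upper()
    if initial <= "H":
        return 0
    if initial <= "P":
        return 1
    return 2

print(shard_by_last_name("Anderson"))  # 0
print(shard_for_user(1001, 4))         # stable shard index in [0, 4)
```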
-##### Disadvantage(s) : sharding
+##### Disadvantage(s): sharding
* You'll need to update your application logic to work with shards, which could result in complex SQL queries.
* Data distribution can become lopsided in a shard. For example, a set of power users on a shard could result in increased load to that shard compared to others.
@@ -916,19 +916,19 @@ Common ways to shard a table of users is either through the user's last name ini
##### Source(s) and further reading: sharding
-* [The coming of the shard](http://highscalability.com/blog/2009/8/6/an-unorthodox-approach-to-database-design-the-coming-of-the.html)
+* [The coming of the shard](http://highscalability.com/blog/2009/8/6/an-unorthodox-approach-to-database-design-the-coming-of-the.html)
* [Shard database architecture](https://en.wikipedia.org/wiki/Shard_(database_architecture))
-* [Consistent hashing](http://www.paperplanes.de/2011/12/9/the-magic-of-consistent-hashing.html)
+* [Consistent hashing](http://www.paperplanes.de/2011/12/9/the-magic-of-consistent-hashing.html)
#### Denormalization
Denormalization attempts to improve read performance at the expense of some write performance. Redundant copies of the data are written in multiple tables to avoid expensive joins. Some RDBMS such as [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL) and Oracle support [materialized views](https://en.wikipedia.org/wiki/Materialized_view) which handle the work of storing redundant information and keeping redundant copies consistent.
-Once data becomes distributed with techniques such as [federation](#federation) and [sharding](#sharding) , managing joins across data centers further increases complexity. Denormalization might circumvent the need for such complex joins.
+Once data becomes distributed with techniques such as [federation](#federation) and [sharding](#sharding), managing joins across data centers further increases complexity. Denormalization might circumvent the need for such complex joins.
In most systems, reads can heavily outnumber writes 100:1 or even 1000:1. A read resulting in a complex database join can be very expensive, spending a significant amount of time on disk operations.
-##### Disadvantage(s) : denormalization
+##### Disadvantage(s): denormalization
* Data is duplicated.
* Constraints can help redundant copies of information stay in sync, which increases complexity of the database design.
@@ -936,7 +936,7 @@ In most systems, reads can heavily outnumber writes 100:1 or even 1000:1. A rea
###### Source(s) and further reading: denormalization
-* [Denormalization](https://en.wikipedia.org/wiki/Denormalization)
+* [Denormalization](https://en.wikipedia.org/wiki/Denormalization)
#### SQL tuning
@@ -944,7 +944,7 @@ SQL tuning is a broad topic and many [books](https://www.amazon.com/s/ref=nb_sb_
It's important to **benchmark** and **profile** to simulate and uncover bottlenecks.
-* **Benchmark** - Simulate high-load situations with tools such as [ab](http://httpd.apache.org/docs/2.2/programs/ab.html) .
+* **Benchmark** - Simulate high-load situations with tools such as [ab](http://httpd.apache.org/docs/2.2/programs/ab.html).
* **Profile** - Enable tools such as the [slow query log](http://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html) to help track performance issues.
Benchmarking and profiling might point you to the following optimizations.
@@ -958,8 +958,8 @@ Benchmarking and profiling might point you to the following optimizations.
* Use `INT` for larger numbers up to 2^32 or 4 billion.
* Use `DECIMAL` for currency to avoid floating point representation errors.
* Avoid storing large `BLOBS`; store the location of where to get the object instead.
-* `VARCHAR(255) ` is the largest number of characters that can be counted in an 8 bit number, often maximizing the use of a byte in some RDBMS.
-* Set the `NOT NULL` constraint where applicable to [improve search performance](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search) .
+* `VARCHAR(255)` is often used because 255 is the largest count that fits in an 8-bit number, maximizing the use of a byte in some RDBMS.
+* Set the `NOT NULL` constraint where applicable to [improve search performance](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search).
##### Use good indices
@@ -979,32 +979,32 @@ Benchmarking and profiling might point you to the following optimizations.
##### Tune the query cache
-* In some cases, the [query cache](https://dev.mysql.com/doc/refman/5.7/en/query-cache.html) could lead to [performance issues](https://www.percona.com/blog/2016/10/12/mysql-5-7-performance-tuning-immediately-after-installation/) .
+* In some cases, the [query cache](https://dev.mysql.com/doc/refman/5.7/en/query-cache.html) could lead to [performance issues](https://www.percona.com/blog/2016/10/12/mysql-5-7-performance-tuning-immediately-after-installation/).
##### Source(s) and further reading: SQL tuning
-* [Tips for optimizing MySQL queries](http://aiddroid.com/10-tips-optimizing-mysql-queries-dont-suck/)
-* [Is there a good reason i see VARCHAR(255) used so often?](http://stackoverflow.com/questions/1217466/is-there-a-good-reason-i-see-varchar255-used-so-often-as-opposed-to-another-l)
-* [How do null values affect performance?](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search)
-* [Slow query log](http://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html)
+* [Tips for optimizing MySQL queries](http://aiddroid.com/10-tips-optimizing-mysql-queries-dont-suck/)
+* [Is there a good reason I see VARCHAR(255) used so often?](http://stackoverflow.com/questions/1217466/is-there-a-good-reason-i-see-varchar255-used-so-often-as-opposed-to-another-l)
+* [How do null values affect performance?](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search)
+* [Slow query log](http://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html)
### NoSQL
-NoSQL is a collection of data items represented in a **key-value store**, **document store**, **wide column store**, or a **graph database**. Data is denormalized, and joins are generally done in the application code. Most NoSQL stores lack true ACID transactions and favor [eventual consistency](#eventual-consistency) .
+NoSQL is a collection of data items represented in a **key-value store**, **document store**, **wide column store**, or a **graph database**. Data is denormalized, and joins are generally done in the application code. Most NoSQL stores lack true ACID transactions and favor [eventual consistency](#eventual-consistency).
-**BASE** is often used to describe the properties of NoSQL databases. In comparison with the [CAP Theorem](#cap-theorem) , BASE chooses availability over consistency.
+**BASE** is often used to describe the properties of NoSQL databases. In comparison with the [CAP Theorem](#cap-theorem), BASE chooses availability over consistency.
* **Basically available** - the system guarantees availability.
* **Soft state** - the state of the system may change over time, even without input.
* **Eventual consistency** - the system will become consistent over a period of time, given that the system doesn't receive input during that period.
-In addition to choosing between [SQL or NoSQL](#sql-or-nosql) , it is helpful to understand which type of NoSQL database best fits your use case(s) . We'll review **key-value stores**, **document stores**, **wide column stores**, and **graph databases** in the next section.
+In addition to choosing between [SQL or NoSQL](#sql-or-nosql), it is helpful to understand which type of NoSQL database best fits your use case(s). We'll review **key-value stores**, **document stores**, **wide column stores**, and **graph databases** in the next section.
#### Key-value store
> Abstraction: hash table
-A key-value store generally allows for O(1) reads and writes and is often backed by memory or SSD. Data stores can maintain keys in [lexicographic order](https://en.wikipedia.org/wiki/Lexicographical_order) , allowing efficient retrieval of key ranges. Key-value stores can allow for storing of metadata with a value.
+A key-value store generally allows for O(1) reads and writes and is often backed by memory or SSD. Data stores can maintain keys in [lexicographic order](https://en.wikipedia.org/wiki/Lexicographical_order), allowing efficient retrieval of key ranges. Key-value stores can allow for storing of metadata with a value.
Key-value stores provide high performance and are often used for simple data models or for rapidly-changing data, such as an in-memory cache layer. Since they offer only a limited set of operations, complexity is shifted to the application layer if additional operations are needed.
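A toy sketch of the hash-table abstraction with lexicographically ordered keys, showing why key order makes range (and prefix) scans efficient. The class and key naming scheme are illustrative assumptions, not any real store's API:

```python
import bisect

class SortedKVStore:
    """Sketch of a key-value store that keeps keys in lexicographic order,
    enabling efficient range scans (as BigTable-style stores do)."""

    def __init__(self):
        self.keys = []   # sorted list of keys
        self.data = {}   # key -> value

    def put(self, key, value):
        if key not in self.data:
            bisect.insort(self.keys, key)  # keep keys sorted on insert
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)

    def range_scan(self, start, end):
        """Return (key, value) pairs with start <= key < end."""
        lo = bisect.bisect_left(self.keys, start)
        hi = bisect.bisect_left(self.keys, end)
        return [(k, self.data[k]) for k in self.keys[lo:hi]]

store = SortedKVStore()
store.put("user:1001", "alice")
store.put("user:1002", "bob")
store.put("order:500", "widget")
# ';' sorts just after ':', so this scans every key with the "user:" prefix
print(store.range_scan("user:", "user;"))  # [('user:1001', 'alice'), ('user:1002', 'bob')]
```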
@@ -1012,16 +1012,16 @@ A key-value store is the basis for more complex systems such as a document store
##### Source(s) and further reading: key-value store
-* [Key-value database](https://en.wikipedia.org/wiki/Key-value_database)
-* [Disadvantages of key-value stores](http://stackoverflow.com/questions/4056093/what-are-the-disadvantages-of-using-a-key-value-table-over-nullable-columns-or)
-* [Redis architecture](http://qnimate.com/overview-of-redis-architecture/)
-* [Memcached architecture](https://www.adayinthelifeof.nl/2011/02/06/memcache-internals/)
+* [Key-value database](https://en.wikipedia.org/wiki/Key-value_database)
+* [Disadvantages of key-value stores](http://stackoverflow.com/questions/4056093/what-are-the-disadvantages-of-using-a-key-value-table-over-nullable-columns-or)
+* [Redis architecture](http://qnimate.com/overview-of-redis-architecture/)
+* [Memcached architecture](https://www.adayinthelifeof.nl/2011/02/06/memcache-internals/)
#### Document store
> Abstraction: key-value store with documents stored as values
-A document store is centered around documents (XML, JSON, binary, etc) , where a document stores all information for a given object. Document stores provide APIs or a query language to query based on the internal structure of the document itself. *Note, many key-value stores include features for working with a value's metadata, blurring the lines between these two storage types.*
+A document store is centered around documents (XML, JSON, binary, etc), where a document stores all information for a given object. Document stores provide APIs or a query language to query based on the internal structure of the document itself. *Note, many key-value stores include features for working with a value's metadata, blurring the lines between these two storage types.*
Based on the underlying implementation, documents are organized by collections, tags, metadata, or directories. Although documents can be organized or grouped together, documents may have fields that are completely different from each other.
@@ -1031,10 +1031,10 @@ Document stores provide high flexibility and are often used for working with occ
##### Source(s) and further reading: document store
-* [Document-oriented database](https://en.wikipedia.org/wiki/Document-oriented_database)
-* [MongoDB architecture](https://www.mongodb.com/mongodb-architecture)
-* [CouchDB architecture](https://blog.couchdb.org/2016/08/01/couchdb-2-0-architecture/)
-* [Elasticsearch architecture](https://www.elastic.co/blog/found-elasticsearch-from-the-bottom-up)
+* [Document-oriented database](https://en.wikipedia.org/wiki/Document-oriented_database)
+* [MongoDB architecture](https://www.mongodb.com/mongodb-architecture)
+* [CouchDB architecture](https://blog.couchdb.org/2016/08/01/couchdb-2-0-architecture/)
+* [Elasticsearch architecture](https://www.elastic.co/blog/found-elasticsearch-from-the-bottom-up)
#### Wide column store
@@ -1046,7 +1046,7 @@ Document stores provide high flexibility and are often used for working with occ
> Abstraction: nested map `ColumnFamily<RowKey, Columns<ColKey, Value, Timestamp>>`
-A wide column store's basic unit of data is a column (name/value pair) . A column can be grouped in column families (analogous to a SQL table) . Super column families further group column families. You can access each column independently with a row key, and columns with the same row key form a row. Each value contains a timestamp for versioning and for conflict resolution.
+A wide column store's basic unit of data is a column (name/value pair). A column can be grouped in column families (analogous to a SQL table). Super column families further group column families. You can access each column independently with a row key, and columns with the same row key form a row. Each value contains a timestamp for versioning and for conflict resolution.
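The nested-map abstraction can be sketched with plain dictionaries. This is a data-model illustration only, not any store's actual API; the row key and column names are hypothetical:

```python
import time

# Sketch of the wide column data model: a column family is a nested map of
# row key -> column name -> (value, timestamp). Timestamps support
# versioning and conflict resolution.
column_family = {}

def put(row_key, column, value):
    column_family.setdefault(row_key, {})[column] = (value, time.time())

def get(row_key, column):
    value, _timestamp = column_family[row_key][column]
    return value

put("user:1001", "name", "alice")
put("user:1001", "email", "alice@example.com")
print(get("user:1001", "name"))  # alice
```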
Google introduced [Bigtable](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf) as the first wide column store, which influenced the open-source [HBase](https://www.edureka.co/blog/hbase-architecture/), often used in the Hadoop ecosystem, and [Cassandra](http://docs.datastax.com/en/cassandra/3.0/cassandra/architecture/archIntro.html) from Facebook. Stores such as BigTable, HBase, and Cassandra maintain keys in lexicographic order, allowing efficient retrieval of selective key ranges.
@@ -1054,10 +1054,10 @@ Wide column stores offer high availability and high scalability. They are often
##### Source(s) and further reading: wide column store
-* [SQL & NoSQL, a brief history](http://blog.grio.com/2015/11/sql-nosql-a-brief-history.html)
-* [Bigtable architecture](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf)
-* [HBase architecture](https://www.edureka.co/blog/hbase-architecture/)
-* [Cassandra architecture](http://docs.datastax.com/en/cassandra/3.0/cassandra/architecture/archIntro.html)
+* [SQL & NoSQL, a brief history](http://blog.grio.com/2015/11/sql-nosql-a-brief-history.html)
+* [Bigtable architecture](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf)
+* [HBase architecture](https://www.edureka.co/blog/hbase-architecture/)
+* [Cassandra architecture](http://docs.datastax.com/en/cassandra/3.0/cassandra/architecture/archIntro.html)
#### Graph database
@@ -1071,21 +1071,21 @@ Wide column stores offer high availability and high scalability. They are often
In a graph database, each node is a record and each arc is a relationship between two nodes. Graph databases are optimized to represent complex relationships with many foreign keys or many-to-many relationships.
-Graphs databases offer high performance for data models with complex relationships, such as a social network. They are relatively new and are not yet widely-used; it might be more difficult to find development tools and resources. Many graphs can only be accessed with [REST APIs](#representational-state-transfer-rest) .
+Graph databases offer high performance for data models with complex relationships, such as a social network. They are relatively new and not yet widely used; it might be more difficult to find development tools and resources. Many graph databases can only be accessed with [REST APIs](#representational-state-transfer-rest).
##### Source(s) and further reading: graph
-* [Graph database](https://en.wikipedia.org/wiki/Graph_database)
-* [Neo4j](https://neo4j.com/)
-* [FlockDB](https://blog.twitter.com/2010/introducing-flockdb)
+* [Graph database](https://en.wikipedia.org/wiki/Graph_database)
+* [Neo4j](https://neo4j.com/)
+* [FlockDB](https://blog.twitter.com/2010/introducing-flockdb)
#### Source(s) and further reading: NoSQL
-* [Explanation of base terminology](http://stackoverflow.com/questions/3342497/explanation-of-base-terminology)
-* [NoSQL databases a survey and decision guidance](https://medium.com/baqend-blog/nosql-databases-a-survey-and-decision-guidance-ea7823a822d#.wskogqenq)
-* [Scalability](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
-* [Introduction to NoSQL](https://www.youtube.com/watch?v=qI_g07C_Q5I)
-* [NoSQL patterns](http://horicky.blogspot.com/2009/11/nosql-patterns.html)
+* [Explanation of base terminology](http://stackoverflow.com/questions/3342497/explanation-of-base-terminology)
+* [NoSQL databases a survey and decision guidance](https://medium.com/baqend-blog/nosql-databases-a-survey-and-decision-guidance-ea7823a822d#.wskogqenq)
+* [Scalability](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
+* [Introduction to NoSQL](https://www.youtube.com/watch?v=qI_g07C_Q5I)
+* [NoSQL patterns](http://horicky.blogspot.com/2009/11/nosql-patterns.html)
### SQL or NoSQL
@@ -1126,8 +1126,8 @@ Sample data well-suited for NoSQL:
##### Source(s) and further reading: SQL or NoSQL
-* [Scaling up to your first 10 million users](https://www.youtube.com/watch?v=kKjm4ehYiMs)
-* [SQL vs NoSQL differences](https://www.sitepoint.com/sql-vs-nosql-differences/)
+* [Scaling up to your first 10 million users](https://www.youtube.com/watch?v=kKjm4ehYiMs)
+* [SQL vs NoSQL differences](https://www.sitepoint.com/sql-vs-nosql-differences/)
## Cache
@@ -1143,7 +1143,7 @@ Databases often benefit from a uniform distribution of reads and writes across i
### Client caching
-Caches can be located on the client side (OS or browser) , [server side](#reverse-proxy-web-server) , or in a distinct cache layer.
+Caches can be located on the client side (OS or browser), [server side](#reverse-proxy-web-server), or in a distinct cache layer.
### CDN caching
@@ -1159,7 +1159,7 @@ Your database usually includes some level of caching in a default configuration,
### Application caching
-In-memory caches such as Memcached and Redis are key-value stores between your application and your data storage. Since the data is held in RAM, it is much faster than typical databases where data is stored on disk. RAM is more limited than disk, so [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) algorithms such as [least recently used (LRU) ](https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_(LRU)) can help invalidate 'cold' entries and keep 'hot' data in RAM.
+In-memory caches such as Memcached and Redis are key-value stores between your application and your data storage. Since the data is held in RAM, it is much faster than typical databases where data is stored on disk. RAM is more limited than disk, so [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) algorithms such as [least recently used (LRU)](https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_(LRU)) can help invalidate 'cold' entries and keep 'hot' data in RAM.
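A minimal sketch of the LRU policy mentioned above, using an ordered dictionary to track recency (illustrative only; Memcached and Redis implement eviction internally):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache sketch: evicts the least recently used entry
    when capacity is exceeded, keeping 'hot' data in memory."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def set(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

cache = LRUCache(capacity=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")         # touch 'a' so it becomes the most recently used
cache.set("c", 3)      # evicts 'b', the coldest entry
print(cache.get("b"))  # None
```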
Redis has the following additional features:
@@ -1184,7 +1184,7 @@ Whenever you query the database, hash the query as a key and store the result to
### Caching at the object level
-See your data as an object, similar to what you do with your application code. Have your application assemble the dataset from the database into a class instance or a data structure(s) :
+See your data as an object, similar to what you do with your application code. Have your application assemble the dataset from the database into a class instance or a data structure(s):
* Remove the object from cache if its underlying data has changed
* Allows for asynchronous processing: workers assemble objects by consuming the latest cached object
@@ -1216,12 +1216,12 @@ The application is responsible for reading and writing from storage. The cache
* Return entry
```python
-def get_user(self, user_id) :
- user = cache.get("user.{0}", user_id)
+def get_user(self, user_id):
+ user = cache.get("user.{0}", user_id)
if user is None:
- user = db.query("SELECT * FROM users WHERE user_id = {0}", user_id)
+ user = db.query("SELECT * FROM users WHERE user_id = {0}", user_id)
if user is not None:
- key = "user.{0}".format(user_id)
+ key = "user.{0}".format(user_id)
cache.set(key, json.dumps(user))
return user
```
@@ -1230,7 +1230,7 @@ def get_user(self, user_id) :
Subsequent reads of data added to cache are fast. Cache-aside is also referred to as lazy loading. Only requested data is cached, which avoids filling up the cache with data that isn't requested.
-##### Disadvantage(s) : cache-aside
+##### Disadvantage(s): cache-aside
* Each cache miss results in three trips, which can cause a noticeable delay.
* Data can become stale if it is updated in the database. This issue is mitigated by setting a time-to-live (TTL) which forces an update of the cache entry, or by using write-through.
@@ -1253,25 +1253,25 @@ The application uses the cache as the main data store, reading and writing data
Application code:
```python
-set_user(12345, {"foo":"bar"})
+set_user(12345, {"foo":"bar"})
```
Cache code:
```python
-def set_user(user_id, values) :
- user = db.query("UPDATE Users WHERE id = {0}", user_id, values)
- cache.set(user_id, user)
+def set_user(user_id, values):
+ user = db.query("UPDATE Users WHERE id = {0}", user_id, values)
+ cache.set(user_id, user)
```
Write-through is a slow overall operation due to the write operation, but subsequent reads of just written data are fast. Users are generally more tolerant of latency when updating data than reading data. Data in the cache is not stale.
-##### Disadvantage(s) : write through
+##### Disadvantage(s): write through
* When a new node is created due to failure or scaling, the new node will not cache entries until the entry is updated in the database. Cache-aside in conjunction with write through can mitigate this issue.
* Most data written might never be read, which can be minimized with a TTL.
-#### Write-behind (write-back)
+#### Write-behind (write-back)
@@ -1284,7 +1284,7 @@ In write-behind, the application does the following:
* Add/update entry in cache
* Asynchronously write entry to the data store, improving write performance
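The steps above can be sketched with a background worker draining a queue. This is a simplified, assumption-laden sketch (a plain dict stands in for the data store, and a daemon thread for the async writer); production systems batch writes and handle failures:

```python
import queue
import threading

class WriteBehindCache:
    """Sketch of write-behind: writes land in the cache immediately and are
    flushed to the data store asynchronously by a background worker."""

    def __init__(self, datastore):
        self.cache = {}
        self.datastore = datastore
        self.pending = queue.Queue()
        worker = threading.Thread(target=self._flush_loop, daemon=True)
        worker.start()

    def set(self, key, value):
        self.cache[key] = value          # fast, synchronous cache write
        self.pending.put((key, value))   # durable write happens later

    def _flush_loop(self):
        while True:
            key, value = self.pending.get()
            self.datastore[key] = value  # stands in for the slow database write
            self.pending.task_done()

datastore = {}
cache = WriteBehindCache(datastore)
cache.set("user:1", "alice")
cache.pending.join()                     # wait for the async flush (demo only)
print(datastore)  # {'user:1': 'alice'}
```

Note the trade-off this makes visible: until the queue drains, the data store lags the cache, which is exactly the data-loss window described below.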
-##### Disadvantage(s) : write-behind
+##### Disadvantage(s): write-behind
* There could be data loss if the cache goes down prior to its contents hitting the data store.
* It is more complex to implement write-behind than it is to implement cache-aside or write-through.
@@ -1301,24 +1301,24 @@ You can configure the cache to automatically refresh any recently accessed cache
Refresh-ahead can result in reduced latency vs read-through if the cache can accurately predict which items are likely to be needed in the future.
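A simplified sketch of the refresh-ahead decision: entries close to expiry are reloaded before they lapse. The loader callable and the 0.5 refresh-ahead factor are illustrative assumptions, and the refresh here is synchronous for clarity (real implementations refresh asynchronously):

```python
import time

class RefreshAheadCache:
    """Sketch of refresh-ahead: when an accessed entry is close to expiring,
    refresh it proactively so a later reader does not hit a cold miss."""

    def __init__(self, loader, ttl=10.0, refresh_factor=0.5):
        self.loader = loader              # callable that fetches fresh data
        self.ttl = ttl
        self.refresh_factor = refresh_factor
        self.entries = {}                 # key -> (value, expires_at)

    def get(self, key):
        now = time.time()
        entry = self.entries.get(key)
        if entry is None or entry[1] <= now:
            return self._refresh(key, now)   # miss or expired: load now
        value, expires_at = entry
        if expires_at - now < self.ttl * self.refresh_factor:
            self._refresh(key, now)          # near expiry: refresh ahead
        return value

    def _refresh(self, key, now):
        value = self.loader(key)
        self.entries[key] = (value, now + self.ttl)
        return value
```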
-##### Disadvantage(s) : refresh-ahead
+##### Disadvantage(s): refresh-ahead
* Not accurately predicting which items are likely to be needed in the future can result in worse performance than without refresh-ahead.
-### Disadvantage(s) : cache
+### Disadvantage(s): cache
-* Need to maintain consistency between caches and the source of truth such as the database through [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) .
+* Need to maintain consistency between caches and the source of truth such as the database through [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms).
* Cache invalidation is a difficult problem; there is additional complexity in deciding when to update the cache.
* Need to make application changes such as adding Redis or memcached.
### Source(s) and further reading
-* [From cache to in-memory data grid](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast)
-* [Scalable system design patterns](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html)
-* [Introduction to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale/)
-* [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
-* [Scalability](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
-* [AWS ElastiCache strategies](http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Strategies.html)
+* [From cache to in-memory data grid](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast)
+* [Scalable system design patterns](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html)
+* [Introduction to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale/)
+* [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
+* [Scalability](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
+* [AWS ElastiCache strategies](http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Strategies.html)
* [Wikipedia](https://en.wikipedia.org/wiki/Cache_(computing))
## Asynchronism
@@ -1340,32 +1340,32 @@ Message queues receive, hold, and deliver messages. If an operation is too slow
The user is not blocked and the job is processed in the background. During this time, the client might optionally do a small amount of processing to make it seem like the task has completed. For example, if posting a tweet, the tweet could be instantly posted to your timeline, but it could take some time before your tweet is actually delivered to all of your followers.
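The tweet flow above can be sketched with Python's standard library. Everything here is illustrative: `queue.Queue` stands in for a real message broker, and the timeline and follower-feed structures are invented for the example.

```python
import queue
import threading

fanout_queue = queue.Queue()  # stands in for Redis, RabbitMQ, or SQS
timeline = []                               # the poster's own timeline
follower_feeds = {"alice": [], "bob": []}   # illustrative followers

def post_tweet(tweet):
    timeline.append(tweet)   # appears on the user's timeline instantly
    fanout_queue.put(tweet)  # slow fan-out is deferred to a worker
    # The request returns here; the user is not blocked.

def fanout_worker():
    while True:
        tweet = fanout_queue.get()
        if tweet is None:    # sentinel to stop the worker
            break
        for feed in follower_feeds.values():
            feed.append(tweet)  # delivery to each follower's feed
        fanout_queue.task_done()

threading.Thread(target=fanout_worker, daemon=True).start()

post_tweet("hello, world")
fanout_queue.join()  # demo only: wait for delivery to complete
```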
-**[Redis](https://redis.io/) ** is useful as a simple message broker but messages can be lost.
+**[Redis](https://redis.io/)** is useful as a simple message broker but messages can be lost.
-**[RabbitMQ](https://www.rabbitmq.com/) ** is popular but requires you to adapt to the 'AMQP' protocol and manage your own nodes.
+**[RabbitMQ](https://www.rabbitmq.com/)** is popular but requires you to adapt to the AMQP protocol and manage your own nodes.
-**[Amazon SQS](https://aws.amazon.com/sqs/) ** is hosted but can have high latency and has the possibility of messages being delivered twice.
+**[Amazon SQS](https://aws.amazon.com/sqs/)** is hosted but can have high latency and has the possibility of messages being delivered twice.
### Task queues
Task queues receive tasks and their related data, run them, then deliver their results. They can support scheduling and can be used to run computationally intensive jobs in the background.
-**[Celery](https://docs.celeryproject.org/en/stable/) ** has support for scheduling and primarily has python support.
+**[Celery](https://docs.celeryproject.org/en/stable/)** has support for scheduling and primarily has Python support.
### Back pressure
-If queues start to grow significantly, the queue size can become larger than memory, resulting in cache misses, disk reads, and even slower performance. [Back pressure](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html) can help by limiting the queue size, thereby maintaining a high throughput rate and good response times for jobs already in the queue. Once the queue fills up, clients get a server busy or HTTP 503 status code to try again later. Clients can retry the request at a later time, perhaps with [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) .
+If queues start to grow significantly, the queue size can become larger than memory, resulting in cache misses, disk reads, and even slower performance. [Back pressure](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html) can help by limiting the queue size, thereby maintaining a high throughput rate and good response times for jobs already in the queue. Once the queue fills up, clients get a server busy or HTTP 503 status code to try again later. Clients can retry the request at a later time, perhaps with [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff).
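A minimal sketch of both sides of this interaction: the server enforces back pressure with a bounded queue, and the client retries rejected jobs with exponential backoff. The queue bound and the backoff base delay are assumed tuning values, not recommendations.

```python
import queue
import random
import time

job_queue = queue.Queue(maxsize=2)  # small bound, for illustration only

def submit(job):
    """Server side: accept the job or signal back pressure.

    Returning False models responding with HTTP 503 (server busy)."""
    try:
        job_queue.put_nowait(job)
        return True
    except queue.Full:
        return False

def submit_with_backoff(job, max_attempts=5):
    # Client side: retry rejected jobs with exponential backoff plus
    # jitter (the 0.1 s base delay is an assumed value).
    for attempt in range(max_attempts):
        if submit(job):
            return True
        time.sleep((2 ** attempt) * 0.1 + random.uniform(0, 0.1))
    return False
```

The bounded queue keeps jobs already accepted fast to process; the jitter in the client's backoff avoids synchronized retry storms.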
-### Disadvantage(s) : asynchronism
+### Disadvantage(s): asynchronism
* Use cases such as inexpensive calculations and realtime workflows might be better suited for synchronous operations, as introducing queues can add delays and complexity.
### Source(s) and further reading
-* [It's all a numbers game](https://www.youtube.com/watch?v=1KRYH75wgy4)
-* [Applying back pressure when overloaded](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html)
-* [Little's law](https://en.wikipedia.org/wiki/Little%27s_law)
-* [What is the difference between a message queue and a task queue?](https://www.quora.com/What-is-the-difference-between-a-message-queue-and-a-task-queue-Why-would-a-task-queue-require-a-message-broker-like-RabbitMQ-Redis-Celery-or-IronMQ-to-function)
+* [It's all a numbers game](https://www.youtube.com/watch?v=1KRYH75wgy4)
+* [Applying back pressure when overloaded](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html)
+* [Little's law](https://en.wikipedia.org/wiki/Little%27s_law)
+* [What is the difference between a message queue and a task queue?](https://www.quora.com/What-is-the-difference-between-a-message-queue-and-a-task-queue-Why-would-a-task-queue-require-a-message-broker-like-RabbitMQ-Redis-Celery-or-IronMQ-to-function)
## Communication
@@ -1375,11 +1375,11 @@ If queues start to grow significantly, the queue size can become larger than mem
Source: OSI 7 layer model
-### Hypertext transfer protocol (HTTP)
+### Hypertext transfer protocol (HTTP)
HTTP is a method for encoding and transporting data between a client and a server. It is a request/response protocol: clients issue requests and servers issue responses with relevant content and completion status info about the request. HTTP is self-contained, allowing requests and responses to flow through many intermediate routers and servers that perform load balancing, caching, encryption, and compression.
-A basic HTTP request consists of a verb (method) and a resource (endpoint) . Below are common HTTP verbs:
+A basic HTTP request consists of a verb (method) and a resource (endpoint). Below are common HTTP verbs:
| Verb | Description | Idempotent* | Safe | Cacheable |
|---|---|---|---|---|
@@ -1395,11 +1395,11 @@ HTTP is an application layer protocol relying on lower-level protocols such as *
#### Source(s) and further reading: HTTP
-* [What is HTTP?](https://www.nginx.com/resources/glossary/http/)
-* [Difference between HTTP and TCP](https://www.quora.com/What-is-the-difference-between-HTTP-protocol-and-TCP-protocol)
-* [Difference between PUT and PATCH](https://laracasts.com/discuss/channels/general-discussion/whats-the-differences-between-put-and-patch?page=1)
+* [What is HTTP?](https://www.nginx.com/resources/glossary/http/)
+* [Difference between HTTP and TCP](https://www.quora.com/What-is-the-difference-between-HTTP-protocol-and-TCP-protocol)
+* [Difference between PUT and PATCH](https://laracasts.com/discuss/channels/general-discussion/whats-the-differences-between-put-and-patch?page=1)
-### Transmission control protocol (TCP)
+### Transmission control protocol (TCP)
-TCP is a connection-oriented protocol over an [IP network](https://en.wikipedia.org/wiki/Internet_Protocol) . Connection is established and terminated using a [handshake](https://en.wikipedia.org/wiki/Handshaking) . All packets sent are guaranteed to reach the destination in the original order and without corruption through:
+TCP is a connection-oriented protocol over an [IP network](https://en.wikipedia.org/wiki/Internet_Protocol). Connection is established and terminated using a [handshake](https://en.wikipedia.org/wiki/Handshaking). All packets sent are guaranteed to reach the destination in the original order and without corruption through:
* Sequence numbers and [checksum fields](https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Checksum_computation) for each packet
* [Acknowledgement](https://en.wikipedia.org/wiki/Acknowledgement_(data_networks)) packets and automatic retransmission
-If the sender does not receive a correct response, it will resend the packets. If there are multiple timeouts, the connection is dropped. TCP also implements [flow control](https://en.wikipedia.org/wiki/Flow_control_(data)) and [congestion control](https://en.wikipedia.org/wiki/Network_congestion#Congestion_control) . These guarantees cause delays and generally result in less efficient transmission than UDP.
+If the sender does not receive a correct response, it will resend the packets. If there are multiple timeouts, the connection is dropped. TCP also implements [flow control](https://en.wikipedia.org/wiki/Flow_control_(data)) and [congestion control](https://en.wikipedia.org/wiki/Network_congestion#Congestion_control). These guarantees cause delays and generally result in less efficient transmission than UDP.
To ensure high throughput, web servers can keep a large number of TCP connections open, resulting in high memory usage. It can be expensive to have a large number of open connections between web server threads and, say, a [memcached](https://memcached.org/) server. [Connection pooling](https://en.wikipedia.org/wiki/Connection_pool) can help in addition to switching to UDP where applicable.
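Connection pooling can be sketched as a fixed-size queue of reusable connections. The `ConnectionPool` class and its `factory` parameter are invented for this example; a production pool would also handle timeouts, health checks, and connection recycling.

```python
import queue

class ConnectionPool:
    """Minimal fixed-size pool; `factory` creates one connection (for
    example, a function that opens a TCP socket to a memcached server)."""

    def __init__(self, factory, size=4):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self):
        return self._pool.get()   # blocks if every connection is in use

    def release(self, conn):
        self._pool.put(conn)
```

Because callers share a bounded set of long-lived connections instead of opening one per request, both the handshake cost and the per-connection memory overhead stay bounded.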
@@ -1423,7 +1423,7 @@ Use TCP over UDP when:
* You need all of the data to arrive intact
* You want the transport to automatically make the best use of the available network throughput
-### User datagram protocol (UDP)
+### User datagram protocol (UDP)
@@ -1445,14 +1445,14 @@ Use UDP over TCP when:
#### Source(s) and further reading: TCP and UDP
-* [Networking for game programming](http://gafferongames.com/networking-for-game-programmers/udp-vs-tcp/)
-* [Key differences between TCP and UDP protocols](http://www.cyberciti.biz/faq/key-differences-between-tcp-and-udp-protocols/)
-* [Difference between TCP and UDP](http://stackoverflow.com/questions/5970383/difference-between-tcp-and-udp)
-* [Transmission control protocol](https://en.wikipedia.org/wiki/Transmission_Control_Protocol)
-* [User datagram protocol](https://en.wikipedia.org/wiki/User_Datagram_Protocol)
-* [Scaling memcache at Facebook](http://www.cs.bu.edu/~jappavoo/jappavoo.github.com/451/papers/memcache-fb.pdf)
+* [Networking for game programming](http://gafferongames.com/networking-for-game-programmers/udp-vs-tcp/)
+* [Key differences between TCP and UDP protocols](http://www.cyberciti.biz/faq/key-differences-between-tcp-and-udp-protocols/)
+* [Difference between TCP and UDP](http://stackoverflow.com/questions/5970383/difference-between-tcp-and-udp)
+* [Transmission control protocol](https://en.wikipedia.org/wiki/Transmission_Control_Protocol)
+* [User datagram protocol](https://en.wikipedia.org/wiki/User_Datagram_Protocol)
+* [Scaling memcache at Facebook](http://www.cs.bu.edu/~jappavoo/jappavoo.github.com/451/papers/memcache-fb.pdf)
-### Remote procedure call (RPC)
+### Remote procedure call (RPC)