revert to origin

pull/519/head
Vu 2021-03-26 23:50:38 +07:00
parent c0531421c8
commit 9b92b8963b
45 changed files with 3384 additions and 3384 deletions

@ -1,11 +1,11 @@
## Review the Contributing Guidelines
Before submitting a pull request, verify it meets all requirements in the [Contributing Guidelines](https://github.com/donnemartin/system-design-primer/blob/master/CONTRIBUTING.md) .
Before submitting a pull request, verify it meets all requirements in the [Contributing Guidelines](https://github.com/donnemartin/system-design-primer/blob/master/CONTRIBUTING.md).
### Translations
See the [Contributing Guidelines](https://github.com/donnemartin/system-design-primer/blob/master/CONTRIBUTING.md) . Verify you've:
See the [Contributing Guidelines](https://github.com/donnemartin/system-design-primer/blob/master/CONTRIBUTING.md). Verify you've:
* Tagged the [language maintainer](https://github.com/donnemartin/system-design-primer/blob/master/TRANSLATIONS.md)
* Tagged the [language maintainer](https://github.com/donnemartin/system-design-primer/blob/master/TRANSLATIONS.md)
* Prefixed the title with a language code
* Example: "ja: Fix ..."

@ -7,14 +7,14 @@ Contributions are welcome!
## Bug Reports
For bug reports or requests [submit an issue](https://github.com/donnemartin/system-design-primer/issues) .
For bug reports or requests [submit an issue](https://github.com/donnemartin/system-design-primer/issues).
## Pull Requests
The preferred way to contribute is to fork the
[main repository](https://github.com/donnemartin/system-design-primer) on GitHub.
1. Fork the [main repository](https://github.com/donnemartin/system-design-primer) . Click on the 'Fork' button near the top of the page. This creates a copy of the code under your account on the GitHub server.
1. Fork the [main repository](https://github.com/donnemartin/system-design-primer). Click on the 'Fork' button near the top of the page. This creates a copy of the code under your account on the GitHub server.
2. Clone this copy to your local disk:
@ -38,7 +38,7 @@ The preferred way to contribute is to fork the
### GitHub Pull Requests Docs
If you are not familiar with pull requests, review the [pull request docs](https://help.github.com/articles/using-pull-requests/) .
If you are not familiar with pull requests, review the [pull request docs](https://help.github.com/articles/using-pull-requests/).
## Translations
@ -48,7 +48,7 @@ We'd like for the guide to be available in many languages. Here is the process f
* Translations follow the content of the original. Contributors must speak at least some English, so that translations do not diverge.
* Each translation has a maintainer to update the translation as the original evolves and to review others' changes. This doesn't require a lot of time, but a review by the maintainer is important to maintain quality.
See [Translations](TRANSLATIONS.md) .
See [Translations](TRANSLATIONS.md).
### Changes to translations
@ -56,7 +56,7 @@ See [Translations](TRANSLATIONS.md) .
* Changes that improve translations should be made directly on the file for that language. Pull requests should only modify one language at a time.
* Submit a pull request with changes to the file in that language. Each language has a maintainer, who reviews changes in that language. Then the primary maintainer [@donnemartin](https://github.com/donnemartin) merges it in.
* Prefix pull requests and issues with language codes if they are for that translation only, e.g. "es: Improve grammar", so maintainers can find them easily.
* Tag the translation maintainer for a code review, see the list of [translation maintainers](TRANSLATIONS.md) .
* Tag the translation maintainer for a code review, see the list of [translation maintainers](TRANSLATIONS.md).
* You will need to get a review from a native speaker (preferably the language maintainer) before your pull request is merged.
### Adding translations to new languages
@ -64,9 +64,9 @@ See [Translations](TRANSLATIONS.md) .
Translations to new languages are always welcome! Keep in mind a translation must be maintained.
* Do you have time to be a maintainer for a new language? Please see the list of [translations](TRANSLATIONS.md) and tell us so we know we can count on you in the future.
* Check the [translations](TRANSLATIONS.md) , issues, and pull requests to see if a translation is in progress or stalled. If it's in progress, offer to help. If it's stalled, consider becoming the maintainer if you can commit to it.
* Check the [translations](TRANSLATIONS.md), issues, and pull requests to see if a translation is in progress or stalled. If it's in progress, offer to help. If it's stalled, consider becoming the maintainer if you can commit to it.
* If a translation has not yet been started, file an issue for your language so people know you are working on it and we'll coordinate. Confirm you are native level in the language and are willing to maintain the translation, so it's not orphaned.
* To get started, fork the repo, then submit a pull request to the main repo with the single file README-xx.md added, where xx is the language code. Use standard [IETF language tags](https://www.w3.org/International/articles/language-tags/) , i.e. the same as is used by Wikipedia, *not* the code for a single country. These are usually just the two-letter lowercase code, for example, `fr` for French and `uk` for Ukrainian (not `ua`, which is for the country) . For languages that have variations, use the shortest tag, such as `zh-Hant`.
* To get started, fork the repo, then submit a pull request to the main repo with the single file README-xx.md added, where xx is the language code. Use standard [IETF language tags](https://www.w3.org/International/articles/language-tags/), i.e. the same as is used by Wikipedia, *not* the code for a single country. These are usually just the two-letter lowercase code, for example, `fr` for French and `uk` for Ukrainian (not `ua`, which is for the country). For languages that have variations, use the shortest tag, such as `zh-Hant`.
* Feel free to invite friends to help your original translation by having them fork your repo, then merging their pull requests to your forked repo. Translations are difficult and usually have errors that others need to find.
* Add links to your translation at the top of every README-XX.md file. For consistency, the link should be added in alphabetical order by ISO code, and the anchor text should be in the native language.
* When you've fully translated the English README.md, comment on the pull request in the main repo that it's ready to be merged.

@ -1,6 +1,6 @@
I am providing code and resources in this repository to you under an open source
license. Because this is my personal repository, the license you receive to my
code and resources is from me and not my employer (Facebook) .
code and resources is from me and not my employer (Facebook).
Copyright 2017 Donne Martin

@ -1,4 +1,4 @@
*[English](README.md) ∙ [日本語](README-ja.md) ∙ [简体中文](README-zh-Hans.md) ∙ [繁體中文](README-zh-TW.md) | [العَرَبِيَّة‎](https://github.com/donnemartin/system-design-primer/issues/170) ∙ [বাংলা](https://github.com/donnemartin/system-design-primer/issues/220) ∙ [Português do Brasil](https://github.com/donnemartin/system-design-primer/issues/40) ∙ [Deutsch](https://github.com/donnemartin/system-design-primer/issues/186) ∙ [ελληνικά](https://github.com/donnemartin/system-design-primer/issues/130) ∙ [עברית](https://github.com/donnemartin/system-design-primer/issues/272) ∙ [Italiano](https://github.com/donnemartin/system-design-primer/issues/104) ∙ [한국어](https://github.com/donnemartin/system-design-primer/issues/102) ∙ [فارسی](https://github.com/donnemartin/system-design-primer/issues/110) ∙ [Polski](https://github.com/donnemartin/system-design-primer/issues/68) ∙ [русский язык](https://github.com/donnemartin/system-design-primer/issues/87) ∙ [Español](https://github.com/donnemartin/system-design-primer/issues/136) ∙ [ภาษาไทย](https://github.com/donnemartin/system-design-primer/issues/187) ∙ [Türkçe](https://github.com/donnemartin/system-design-primer/issues/39) ∙ [tiếng Việt](https://github.com/donnemartin/system-design-primer/issues/127) ∙ [Français](https://github.com/donnemartin/system-design-primer/issues/250) | [Add Translation](https://github.com/donnemartin/system-design-primer/issues/28) *
*[English](README.md) ∙ [日本語](README-ja.md) ∙ [简体中文](README-zh-Hans.md) ∙ [繁體中文](README-zh-TW.md) | [العَرَبِيَّة‎](https://github.com/donnemartin/system-design-primer/issues/170) ∙ [বাংলা](https://github.com/donnemartin/system-design-primer/issues/220) ∙ [Português do Brasil](https://github.com/donnemartin/system-design-primer/issues/40) ∙ [Deutsch](https://github.com/donnemartin/system-design-primer/issues/186) ∙ [ελληνικά](https://github.com/donnemartin/system-design-primer/issues/130) ∙ [עברית](https://github.com/donnemartin/system-design-primer/issues/272) ∙ [Italiano](https://github.com/donnemartin/system-design-primer/issues/104) ∙ [한국어](https://github.com/donnemartin/system-design-primer/issues/102) ∙ [فارسی](https://github.com/donnemartin/system-design-primer/issues/110) ∙ [Polski](https://github.com/donnemartin/system-design-primer/issues/68) ∙ [русский язык](https://github.com/donnemartin/system-design-primer/issues/87) ∙ [Español](https://github.com/donnemartin/system-design-primer/issues/136) ∙ [ภาษาไทย](https://github.com/donnemartin/system-design-primer/issues/187) ∙ [Türkçe](https://github.com/donnemartin/system-design-primer/issues/39) ∙ [tiếng Việt](https://github.com/donnemartin/system-design-primer/issues/127) ∙ [Français](https://github.com/donnemartin/system-design-primer/issues/250) | [Add Translation](https://github.com/donnemartin/system-design-primer/issues/28)*
# システム設計入門
@ -35,11 +35,11 @@
面接準備に役立つその他のトピック:
* [学習指針](#学習指針)
* [システム設計面接課題にどのように準備するか](#システム設計面接にどのようにして臨めばいいか)
* [システム設計課題例 **とその解答**](#システム設計課題例とその解答)
* [オブジェクト指向設計課題例、 **とその解答**](#オブジェクト指向設計問題と解答)
* [その他のシステム設計面接課題例](#他のシステム設計面接例題)
* [学習指針](#学習指針)
* [システム設計面接課題にどのように準備するか](#システム設計面接にどのようにして臨めばいいか)
* [システム設計課題例 **とその解答**](#システム設計課題例とその解答)
* [オブジェクト指向設計課題例、 **とその解答**](#オブジェクト指向設計問題と解答)
* [その他のシステム設計面接課題例](#他のシステム設計面接例題)
## 暗記カード
@ -50,24 +50,24 @@
この[Anki用フラッシュカードデッキ](https://apps.ankiweb.net/) は、間隔反復を活用して、システム設計のキーコンセプトの学習を支援します。
* [システム設計デッキ](resources/flash_cards/System%20Design.apkg)
* [システム設計練習課題デッキ](resources/flash_cards/System%20Design%20Exercises.apkg)
* [オブジェクト指向練習課題デッキ](resources/flash_cards/OO%20Design.apkg)
* [システム設計デッキ](resources/flash_cards/System%20Design.apkg)
* [システム設計練習課題デッキ](resources/flash_cards/System%20Design%20Exercises.apkg)
* [オブジェクト指向練習課題デッキ](resources/flash_cards/OO%20Design.apkg)
外出先や移動中の勉強に役立つでしょう。
### コーディング技術課題用の問題: 練習用インタラクティブアプリケーション
コード技術面接用の問題を探している場合は[**こちら**](https://github.com/donnemartin/interactive-coding-challenges)
コード技術面接用の問題を探している場合は[**こちら**](https://github.com/donnemartin/interactive-coding-challenges)
<p align="center">
<img src="images/b4YtAEN.png">
<br/>
</p>
姉妹リポジトリの [**Interactive Coding Challenges**](https://github.com/donnemartin/interactive-coding-challenges) も見てみてください。追加の暗記デッキカードも入っています。
姉妹リポジトリの [**Interactive Coding Challenges**](https://github.com/donnemartin/interactive-coding-challenges)も見てみてください。追加の暗記デッキカードも入っています。
* [Coding deck](https://github.com/donnemartin/interactive-coding-challenges/tree/master/anki_cards/Coding.apkg)
* [Coding deck](https://github.com/donnemartin/interactive-coding-challenges/tree/master/anki_cards/Coding.apkg)
## コントリビュート
@ -78,11 +78,11 @@
* エラー修正
* セクション内容改善
* 新規セクション追加
* [翻訳する](https://github.com/donnemartin/system-design-primer/issues/28)
* [翻訳する](https://github.com/donnemartin/system-design-primer/issues/28)
現在、内容の改善が必要な作業中のコンテンツは[こちら](#進行中の作業) です。
現在、内容の改善が必要な作業中のコンテンツは[こちら](#進行中の作業)です。
コントリビュートの前に[Contributing Guidelines](CONTRIBUTING.md) を読みましょう。
コントリビュートの前に[Contributing Guidelines](CONTRIBUTING.md)を読みましょう。
## システム設計目次
@ -95,92 +95,92 @@
<br/>
</p>
* [システム設計トピック: まずはここから](#システム設計トピックス-まずはここから)
* [Step 1: スケーラビリティに関する動画を見る](#ステップ-1-スケーラビリティに関する動画を観て復習する)
* [Step 2: スケーラビリティに関する記事を読む](#ステップ-2-スケーラビリティに関する資料を読んで復習する)
* [次のステップ](#次のステップ)
* [パフォーマンス vs スケーラビリティ](#パフォーマンス-vs-スケーラビリティ)
* [レイテンシー vs スループット](#レイテンシー-vs-スループット)
* [可用性 vs 一貫性](#可用性-vs-一貫性)
* [CAP理論](#cap-理論)
* [CP - 一貫性(consistency) と分割性(partition) 耐性](#cp---一貫性と分断耐性consistency-and-partition-tolerance)
* [AP - 可用性(availability) と分割性(partition) 耐性](#ap---可用性と分断耐性availability-and-partition-tolerance)
* [一貫性 パターン](#一貫性パターン)
* [弱い一貫性](#弱い一貫性)
* [結果整合性](#結果整合性)
* [強い一貫性](#強い一貫性)
* [可用性 パターン](#可用性パターン)
* [フェイルオーバー](#フェイルオーバー)
* [レプリケーション](#レプリケーション)
* [ドメインネームシステム(DNS) ](#ドメインネームシステム)
* [コンテンツデリバリーネットワーク(CDN) ](#コンテンツデリバリーネットワークcontent-delivery-network)
* [プッシュCDN](#プッシュcdn)
* [プルCDN](#プルcdn)
* [ロードバランサー](#ロードバランサー)
* [アクティブ/パッシブ構成](#アクティブパッシブ)
* [アクティブ/アクティブ構成](#アクティブアクティブ)
* [Layer 4 ロードバランシング](#layer-4-ロードバランシング)
* [Layer 7 ロードバランシング](#layer-7-ロードバランシング)
* [水平スケーリング](#水平スケーリング)
* [リバースプロキシ (WEBサーバー) ](#リバースプロキシwebサーバー)
* [ロードバランサー vs リバースプロキシ](#ロードバランサー-vs-リバースプロキシ)
* [アプリケーションレイヤー](#アプリケーション層)
* [マイクロサービス](#マイクロサービス)
* [サービスディスカバリー](#service-discovery)
* [データベース](#データベース)
* [リレーショナルデータベースマネジメントシステム (RDBMS) ](#リレーショナルデータベースマネジメントシステム-rdbms)
* [マスター/スレーブ レプリケーション](#マスタースレーブ-レプリケーション)
* [マスター/マスター レプリケーション](#マスターマスター-レプリケーション)
* [フェデレーション](#federation)
* [シャーディング](#シャーディング)
* [デノーマライゼーション](#非正規化)
* [SQL チューニング](#sqlチューニング)
* [NoSQL](#nosql)
* [キー/バリューストア](#キーバリューストア)
* [ドキュメントストア](#ドキュメントストア)
* [ワイドカラムストア](#ワイドカラムストア)
* [グラフ データベース](#グラフデータベース)
* [SQL or NoSQL](#sqlかnosqlか)
* [キャッシュ](#キャッシュ)
* [クライアントキャッシング](#クライアントキャッシング)
* [CDNキャッシング](#cdnキャッシング)
* [Webサーバーキャッシング](#webサーバーキャッシング)
* [データベースキャッシング](#データベースキャッシング)
* [アプリケーションキャッシング](#アプリケーションキャッシング)
* [データベースクエリレベルでキャッシングする](#データベースクエリレベルでのキャッシング)
* [オブジェクトレベルでキャッシングする](#オブジェクトレベルでのキャッシング)
* [いつキャッシュを更新するのか](#いつキャッシュを更新するか)
* [キャッシュアサイド](#キャッシュアサイド)
* [ライトスルー](#ライトスルー)
* [ライトビハインド (ライトバック) ](#ライトビハインド-ライトバック)
* [リフレッシュアヘッド](#リフレッシュアヘッド)
* [非同期処理](#非同期処理)
* [メッセージキュー](#メッセージキュー)
* [タスクキュー](#タスクキュー)
* [バックプレッシャー](#バックプレッシャー)
* [通信](#通信)
* [伝送制御プロトコル (TCP) ](#伝送制御プロトコル-tcp)
* [ユーザデータグラムプロトコル (UDP) ](#ユーザデータグラムプロトコル-udp)
* [遠隔手続呼出 (RPC) ](#遠隔手続呼出-rpc)
* [Representational state transfer (REST) ](#representational-state-transfer-rest)
* [セキュリティ](#セキュリティ)
* [補遺](#補遺)
* [2の乗数表](#2の乗数表)
* [全てのプログラマーが知るべきレイテンシー値](#全てのプログラマーが知るべきレイテンシー値)
* [他のシステム設計面接例題](#他のシステム設計面接例題)
* [実世界でのアーキテクチャ](#実世界のアーキテクチャ)
* [各企業のアーキテクチャ](#各企業のアーキテクチャ)
* [企業のエンジニアブログ](#企業のエンジニアブログ)
* [作業中](#進行中の作業)
* [クレジット](#クレジット)
* [連絡情報](#contact-info)
* [ライセンス](#license)
* [システム設計トピック: まずはここから](#システム設計トピックス-まずはここから)
* [Step 1: スケーラビリティに関する動画を見る](#ステップ-1-スケーラビリティに関する動画を観て復習する)
* [Step 2: スケーラビリティに関する記事を読む](#ステップ-2-スケーラビリティに関する資料を読んで復習する)
* [次のステップ](#次のステップ)
* [パフォーマンス vs スケーラビリティ](#パフォーマンス-vs-スケーラビリティ)
* [レイテンシー vs スループット](#レイテンシー-vs-スループット)
* [可用性 vs 一貫性](#可用性-vs-一貫性)
* [CAP理論](#cap-理論)
* [CP - 一貫性(consistency)と分割性(partition)耐性](#cp---一貫性と分断耐性consistency-and-partition-tolerance)
* [AP - 可用性(availability)と分割性(partition)耐性](#ap---可用性と分断耐性availability-and-partition-tolerance)
* [一貫性 パターン](#一貫性パターン)
* [弱い一貫性](#弱い一貫性)
* [結果整合性](#結果整合性)
* [強い一貫性](#強い一貫性)
* [可用性 パターン](#可用性パターン)
* [フェイルオーバー](#フェイルオーバー)
* [レプリケーション](#レプリケーション)
* [ドメインネームシステム(DNS)](#ドメインネームシステム)
* [コンテンツデリバリーネットワーク(CDN)](#コンテンツデリバリーネットワークcontent-delivery-network)
* [プッシュCDN](#プッシュcdn)
* [プルCDN](#プルcdn)
* [ロードバランサー](#ロードバランサー)
* [アクティブ/パッシブ構成](#アクティブパッシブ)
* [アクティブ/アクティブ構成](#アクティブアクティブ)
* [Layer 4 ロードバランシング](#layer-4-ロードバランシング)
* [Layer 7 ロードバランシング](#layer-7-ロードバランシング)
* [水平スケーリング](#水平スケーリング)
* [リバースプロキシ (WEBサーバー)](#リバースプロキシwebサーバー)
* [ロードバランサー vs リバースプロキシ](#ロードバランサー-vs-リバースプロキシ)
* [アプリケーションレイヤー](#アプリケーション層)
* [マイクロサービス](#マイクロサービス)
* [サービスディスカバリー](#service-discovery)
* [データベース](#データベース)
* [リレーショナルデータベースマネジメントシステム (RDBMS)](#リレーショナルデータベースマネジメントシステム-rdbms)
* [マスター/スレーブ レプリケーション](#マスタースレーブ-レプリケーション)
* [マスター/マスター レプリケーション](#マスターマスター-レプリケーション)
* [フェデレーション](#federation)
* [シャーディング](#シャーディング)
* [デノーマライゼーション](#非正規化)
* [SQL チューニング](#sqlチューニング)
* [NoSQL](#nosql)
* [キー/バリューストア](#キーバリューストア)
* [ドキュメントストア](#ドキュメントストア)
* [ワイドカラムストア](#ワイドカラムストア)
* [グラフ データベース](#グラフデータベース)
* [SQL or NoSQL](#sqlかnosqlか)
* [キャッシュ](#キャッシュ)
* [クライアントキャッシング](#クライアントキャッシング)
* [CDNキャッシング](#cdnキャッシング)
* [Webサーバーキャッシング](#webサーバーキャッシング)
* [データベースキャッシング](#データベースキャッシング)
* [アプリケーションキャッシング](#アプリケーションキャッシング)
* [データベースクエリレベルでキャッシングする](#データベースクエリレベルでのキャッシング)
* [オブジェクトレベルでキャッシングする](#オブジェクトレベルでのキャッシング)
* [いつキャッシュを更新するのか](#いつキャッシュを更新するか)
* [キャッシュアサイド](#キャッシュアサイド)
* [ライトスルー](#ライトスルー)
* [ライトビハインド (ライトバック)](#ライトビハインド-ライトバック)
* [リフレッシュアヘッド](#リフレッシュアヘッド)
* [非同期処理](#非同期処理)
* [メッセージキュー](#メッセージキュー)
* [タスクキュー](#タスクキュー)
* [バックプレッシャー](#バックプレッシャー)
* [通信](#通信)
* [伝送制御プロトコル (TCP)](#伝送制御プロトコル-tcp)
* [ユーザデータグラムプロトコル (UDP)](#ユーザデータグラムプロトコル-udp)
* [遠隔手続呼出 (RPC)](#遠隔手続呼出-rpc)
* [Representational state transfer (REST)](#representational-state-transfer-rest)
* [セキュリティ](#セキュリティ)
* [補遺](#補遺)
* [2の乗数表](#2の乗数表)
* [全てのプログラマーが知るべきレイテンシー値](#全てのプログラマーが知るべきレイテンシー値)
* [他のシステム設計面接例題](#他のシステム設計面接例題)
* [実世界でのアーキテクチャ](#実世界のアーキテクチャ)
* [各企業のアーキテクチャ](#各企業のアーキテクチャ)
* [企業のエンジニアブログ](#企業のエンジニアブログ)
* [作業中](#進行中の作業)
* [クレジット](#クレジット)
* [連絡情報](#contact-info)
* [ライセンス](#license)
## 学習指針
> 学習スパンに応じてみるべきトピックス (short, medium, long)
> 学習スパンに応じてみるべきトピックス (short, medium, long)
![Imgur](images/OfVllex.png)
![Imgur](images/OfVllex.png)
**Q: 面接のためには、ここにあるものすべてをやらないといけないのでしょうか?**
@ -216,7 +216,7 @@
> システム設計面接試験問題にどのように取り組むか
システム設計面接は **open-ended conversation(Yes/Noでは答えられない口頭質問) です**。 自分で会話を組み立てることを求められます。
システム設計面接は **open-ended conversation(Yes/Noでは答えられない口頭質問)です**。 自分で会話を組み立てることを求められます。
以下のステップに従って議論を組み立てることができるでしょう。この過程を確かなものにするために、次のセクション[システム設計課題例とその解答](#system-design-interview-questions-with-solutions) を以下の指針に従って読み込むといいでしょう。
@ -242,10 +242,10 @@
### ステップ 3: 核となるコンポーネントを設計する
それぞれの主要なコンポーネントについての詳細を学ぶ。例えば、[url短縮サービス](solutions/system_design/pastebin/README.md) の設計を問われた際には次のようにするといいでしょう:
それぞれの主要なコンポーネントについての詳細を学ぶ。例えば、[url短縮サービス](solutions/system_design/pastebin/README.md)の設計を問われた際には次のようにするといいでしょう:
* 元のURLのハッシュ化したものを作り、それを保存する
* [MD5](solutions/system_design/pastebin/README.md) と [Base62](solutions/system_design/pastebin/README.md)
* [MD5](solutions/system_design/pastebin/README.md) と [Base62](solutions/system_design/pastebin/README.md)
* ハッシュ衝突
* SQL もしくは NoSQL
* データベーススキーマ
@ -262,23 +262,23 @@
* キャッシング
* データベースシャーディング
取りうる解決策とそのトレードオフについて議論をしよう。全てのことはトレードオフの関係にある。ボトルネックについては[スケーラブルなシステム設計の原理](#システム設計目次) を読むといいでしょう。
取りうる解決策とそのトレードオフについて議論をしよう。全てのことはトレードオフの関係にある。ボトルネックについては[スケーラブルなシステム設計の原理](#システム設計目次)を読むといいでしょう。
### ちょっとした暗算問題
ちょっとした推計値を手計算ですることを求められることもあるかもしれません。[補遺](#補遺) の以下の項目が役に立つでしょう:
ちょっとした推計値を手計算ですることを求められることもあるかもしれません。[補遺](#補遺)の以下の項目が役に立つでしょう:
* [チラ裏計算でシステム設計する](http://highscalability.com/blog/2011/1/26/google-pro-tip-use-back-of-the-envelope-calculations-to-choo.html)
* [2の乗数表](#2の乗数表)
* [全てのプログラマーが知っておくべきレイテンシの参考値](#全てのプログラマーが知るべきレイテンシー値)
* [チラ裏計算でシステム設計する](http://highscalability.com/blog/2011/1/26/google-pro-tip-use-back-of-the-envelope-calculations-to-choo.html)
* [2の乗数表](#2の乗数表)
* [全てのプログラマーが知っておくべきレイテンシの参考値](#全てのプログラマーが知るべきレイテンシー値)
### 文献とその他の参考資料
以下のリンク先ページを見てどのような質問を投げかけられるか概要を頭に入れておきましょう:
* [システム設計面接で成功するには?](https://www.palantir.com/2011/10/how-to-rock-a-systems-design-interview/)
* [システム設計面接](http://www.hiredintech.com/system-design)
* [アーキテクチャ、システム設計面接への導入](https://www.youtube.com/watch?v=ZgdS0EUmn70)
* [システム設計面接で成功するには?](https://www.palantir.com/2011/10/how-to-rock-a-systems-design-interview/)
* [システム設計面接](http://www.hiredintech.com/system-design)
* [アーキテクチャ、システム設計面接への導入](https://www.youtube.com/watch?v=ZgdS0EUmn70)
## システム設計課題例とその解答
@ -289,7 +289,7 @@
| 問題 | |
|---|---|
| Pastebin.com (もしくは Bit.ly) を設計する| [解答](solutions/system_design/pastebin/README.md) |
| Twitterタイムライン (もしくはFacebookフィード) を設計する<br/>Twitter検索(もしくはFacebook検索) 機能を設計する | [解答](solutions/system_design/twitter/README.md) |
| Twitterタイムライン (もしくはFacebookフィード)を設計する<br/>Twitter検索(もしくはFacebook検索)機能を設計する | [解答](solutions/system_design/twitter/README.md) |
| ウェブクローラーを設計する | [解答](solutions/system_design/web_crawler/README.md) |
| Mint.comを設計する | [解答](solutions/system_design/mint/README.md) |
| SNSサービスのデータ構造を設計する | [解答](solutions/system_design/social_graph/README.md) |
@ -300,51 +300,51 @@
### Pastebin.com (もしくは Bit.ly) を設計する
[問題と解答を見る](solutions/system_design/pastebin/README.md)
[問題と解答を見る](solutions/system_design/pastebin/README.md)
![Imgur](images/4edXG0T.png)
![Imgur](images/4edXG0T.png)
### Twitterタイムライン&検索 (もしくはFacebookフィード&検索) を設計する
### Twitterタイムライン&検索 (もしくはFacebookフィード&検索)を設計する
[問題と解答を見る](solutions/system_design/twitter/README.md)
[問題と解答を見る](solutions/system_design/twitter/README.md)
![Imgur](images/jrUBAF7.png)
![Imgur](images/jrUBAF7.png)
### ウェブクローラーの設計
[問題と解答を見る](solutions/system_design/web_crawler/README.md)
[問題と解答を見る](solutions/system_design/web_crawler/README.md)
![Imgur](images/bWxPtQA.png)
![Imgur](images/bWxPtQA.png)
### Mint.comの設計
[問題と解答を見る](solutions/system_design/mint/README.md)
[問題と解答を見る](solutions/system_design/mint/README.md)
![Imgur](images/V5q57vU.png)
![Imgur](images/V5q57vU.png)
### SNSサービスのデータ構造を設計する
[問題と解答を見る](solutions/system_design/social_graph/README.md)
[問題と解答を見る](solutions/system_design/social_graph/README.md)
![Imgur](images/cdCv5g7.png)
![Imgur](images/cdCv5g7.png)
### 検索エンジンのキー/バリュー構造を設計する
[問題と解答を見る](solutions/system_design/query_cache/README.md)
[問題と解答を見る](solutions/system_design/query_cache/README.md)
![Imgur](images/4j99mhe.png)
![Imgur](images/4j99mhe.png)
### Amazonのカテゴリ毎の売り上げランキングを設計する
[問題と解答を見る](solutions/system_design/sales_rank/README.md)
[問題と解答を見る](solutions/system_design/sales_rank/README.md)
![Imgur](images/MzExP06.png)
![Imgur](images/MzExP06.png)
### AWS上で100万人規模のユーザーを捌くサービスを設計する
[問題と解答を見る](solutions/system_design/scaling_aws/README.md)
[問題と解答を見る](solutions/system_design/scaling_aws/README.md)
![Imgur](images/jj3A5N8.png)
![Imgur](images/jj3A5N8.png)
## オブジェクト指向設計問題と解答
@ -356,13 +356,13 @@
| 問題 | |
|---|---|
| ハッシュマップの設計 | [解答](solutions/object_oriented_design/hash_table/hash_map.ipynb) |
| LRUキャッシュの設計 | [解答](solutions/object_oriented_design/lru_cache/lru_cache.ipynb) |
| コールセンターの設計 | [解答](solutions/object_oriented_design/call_center/call_center.ipynb) |
| カードのデッキの設計 | [解答](solutions/object_oriented_design/deck_of_cards/deck_of_cards.ipynb) |
| 駐車場の設計 | [解答](solutions/object_oriented_design/parking_lot/parking_lot.ipynb) |
| チャットサーバーの設計 | [解答](solutions/object_oriented_design/online_chat/online_chat.ipynb) |
| 円形配列の設計 | [Contribute](#contributing) |
| ハッシュマップの設計 | [解答](solutions/object_oriented_design/hash_table/hash_map.ipynb) |
| LRUキャッシュの設計 | [解答](solutions/object_oriented_design/lru_cache/lru_cache.ipynb) |
| コールセンターの設計 | [解答](solutions/object_oriented_design/call_center/call_center.ipynb) |
| カードのデッキの設計 | [解答](solutions/object_oriented_design/deck_of_cards/deck_of_cards.ipynb) |
| 駐車場の設計 | [解答](solutions/object_oriented_design/parking_lot/parking_lot.ipynb) |
| チャットサーバーの設計 | [解答](solutions/object_oriented_design/online_chat/online_chat.ipynb) |
| 円形配列の設計 | [Contribute](#contributing) |
| オブジェクト指向システム設計問題を追加する | [Contribute](#contributing) |
## システム設計トピックス: まずはここから
@ -373,7 +373,7 @@
### ステップ 1: スケーラビリティに関する動画を観て復習する
[Harvardでのスケーラビリティの講義](https://www.youtube.com/watch?v=-W9F__D3oY4)
[Harvardでのスケーラビリティの講義](https://www.youtube.com/watch?v=-W9F__D3oY4)
* ここで触れられているトピックス:
* 垂直スケーリング
@ -385,13 +385,13 @@
### ステップ 2: スケーラビリティに関する資料を読んで復習する
[スケーラビリティ](http://www.lecloud.net/tagged/scalability/chrono)
[スケーラビリティ](http://www.lecloud.net/tagged/scalability/chrono)
* ここで触れられているトピックス:
* [クローン](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
* [データベース](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
* [キャッシュ](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
* [非同期](http://www.lecloud.net/post/9699762917/scalability-for-dummies-part-4-asynchronism)
* [クローン](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
* [データベース](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
* [キャッシュ](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
* [非同期](http://www.lecloud.net/post/9699762917/scalability-for-dummies-part-4-asynchronism)
### 次のステップ
@ -416,8 +416,8 @@
### その他の参考資料、ページ
* [スケーラビリティについて](http://www.allthingsdistributed.com/2006/03/a_word_on_scalability.html)
* [スケーラビリティ、可用性、安定性、パターン](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
* [スケーラビリティについて](http://www.allthingsdistributed.com/2006/03/a_word_on_scalability.html)
* [スケーラビリティ、可用性、安定性、パターン](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
## レイテンシー vs スループット
@ -429,7 +429,7 @@
### その他の参考資料、ページ
* [レイテンシー vs スループットを理解する](https://community.cadence.com/cadence_blogs_8/b/sd/archive/2010/09/13/understanding-latency-vs-throughput)
* [レイテンシー vs スループットを理解する](https://community.cadence.com/cadence_blogs_8/b/sd/archive/2010/09/13/understanding-latency-vs-throughput)
## 可用性 vs 一貫性
@ -449,21 +449,21 @@
*ネットワークは信頼できないので、分断耐性は必ず保証しなければなりません。つまりソフトウェアシステムとしてのトレードオフは、一貫性を取るか、可用性を取るかを考えなければなりません。*
#### CP - 一貫性と分断耐性(consistency and partition tolerance)
#### CP - 一貫性と分断耐性(consistency and partition tolerance)
分断されたノードからのレスポンスを待ち続けているとタイムアウトエラーに陥る可能性があります。CPはあなたのサービスがアトミック(不可分)な読み書き操作を必要とする際にはいい選択肢でしょう。
#### AP - 可用性と分断耐性(availability and partition tolerance)
#### AP - 可用性と分断耐性(availability and partition tolerance)
レスポンスはノード上にあるデータで最新のものを返します。つまり、最新版のデータが返されるとは限りません。分断が解消された後も、書き込みが反映されるのには時間がかかります。
[結果整合性](#結果整合性)  を求めるサービスの際にはAPを採用するのがいいでしょう。もしくは、外部エラーに関わらずシステムが稼働する必要がある際にも同様です。
[結果整合性](#結果整合性) を求めるサービスの際にはAPを採用するのがいいでしょう。もしくは、外部エラーに関わらずシステムが稼働する必要がある際にも同様です。
### その他の参考資料、ページ
* [CAP 理論を振り返る](http://robertgreiner.com/2014/08/cap-theorem-revisited/)
* [平易な英語でのCAP 理論のイントロ](http://ksat.me/a-plain-english-introduction-to-cap-theorem/)
* [CAP FAQ](https://github.com/henryr/cap-faq)
* [CAP 理論を振り返る](http://robertgreiner.com/2014/08/cap-theorem-revisited/)
* [平易な英語でのCAP 理論のイントロ](http://ksat.me/a-plain-english-introduction-to-cap-theorem/)
* [CAP FAQ](https://github.com/henryr/cap-faq)
## 一貫性パターン
@ -477,7 +477,7 @@
### 結果整合性
書き込みの後、読み取りは最終的にはその結果を読み取ることができる(ミリ秒ほど遅れてというのが一般的です) 。データは非同期的に複製されます。
書き込みの後、読み取りは最終的にはその結果を読み取ることができる(ミリ秒ほど遅れてというのが一般的です)。データは非同期的に複製されます。
このアプローチはDNSやメールシステムなどに採用されています。結果整合性は多くのリクエストを捌くサービスと相性がいいでしょう。
@ -489,7 +489,7 @@
### その他の参考資料、ページ
* [データセンター間でのトランザクション](http://snarfed.org/transactions_across_datacenters_io.html)
* [データセンター間でのトランザクション](http://snarfed.org/transactions_across_datacenters_io.html)
## 可用性パターン
@ -524,8 +524,8 @@
このトピックは [データベース](#データベース) セクションにおいてより詳細に解説されています:
* [マスター・スレーブ レプリケーション](#マスタースレーブ-レプリケーション)
* [マスター・マスター レプリケーション](#マスターマスター-レプリケーション)
* [マスター・スレーブ レプリケーション](#マスタースレーブ-レプリケーション)
* [マスター・マスター レプリケーション](#マスターマスター-レプリケーション)
## ドメインネームシステム
@ -537,16 +537,16 @@
ドメインネームシステム (DNS) は www.example.com などのドメインネームをIPアドレスへと翻訳します。
DNSは少数のオーソライズされたサーバーが上位に位置する階層的構造です。あなたのルーターもしくはISPは検索をする際にどのDNSサーバーに接続するかという情報を提供します。低い階層のDNSサーバーはその経路マップをキャッシュします。ただ、この情報は伝搬遅延によって陳腐化する可能性があります。DNSの結果はあなたのブラウザもしくはOSに一定期間[time to live (TTL) ](https://en.wikipedia.org/wiki/Time_to_live) に設定された期間)キャッシュされます。
DNSは少数のオーソライズされたサーバーが上位に位置する階層的構造です。あなたのルーターもしくはISPは検索をする際にどのDNSサーバーに接続するかという情報を提供します。低い階層のDNSサーバーはその経路マップをキャッシュします。ただ、この情報は伝搬遅延によって陳腐化する可能性があります。DNSの結果はあなたのブラウザもしくはOSに一定期間([time to live (TTL)](https://en.wikipedia.org/wiki/Time_to_live)に設定された期間)キャッシュされます。
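As a tiny illustration of the lookup step (the hostname is only an example), the operating system's resolver can be queried directly; its answer may be served from a local cache until the record's TTL expires:

```python
import socket

# Ask the operating system's resolver for an IPv4 address.
# The answer may come from a cached record and stays valid
# only until the record's time-to-live (TTL) expires.
address = socket.gethostbyname("www.example.com")
print(address)
```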
* **NS record (name server) ** - あなたのドメイン・サブドメインでのDNSサーバーを特定します。
* **MX record (mail exchange) ** - メッセージを受け取るメールサーバーを特定します。
* **A record (address) ** - IPアドレスに名前をつけます。
* **CNAME (canonical) ** - 他の名前もしくは `CNAME` (example.com を www.example.com) もしくは `A` recordへと名前を指し示す。
* **NS record (name server)** - あなたのドメイン・サブドメインでのDNSサーバーを特定します。
* **MX record (mail exchange)** - メッセージを受け取るメールサーバーを特定します。
* **A record (address)** - IPアドレスに名前をつけます。
* **CNAME (canonical)** - 他の名前もしくは `CNAME` (example.com を www.example.com) もしくは `A` recordへと名前を指し示す。
[CloudFlare](https://www.cloudflare.com/dns/) や [Route 53](https://aws.amazon.com/route53/) などのサービスはマネージドDNSサービスを提供しています。いくつかのDNSサービスでは様々な手法を使ってトラフィックを捌くことができます:
* [加重ラウンドロビン](http://g33kinfo.com/info/archives/2657)
* [加重ラウンドロビン](http://g33kinfo.com/info/archives/2657)
* トラフィックがメンテナンス中のサーバーに行くのを防ぎます
* 様々なクラスターサイズに応じて調整します
* A/B テスト
@ -556,16 +556,16 @@ DNSは少数のオーソライズされたサーバーが上位に位置する
### 欠点: DNS
* 上記で示されているようなキャッシングによって緩和されているとはいえ、DNSサーバーへの接続には少し遅延が生じる。
* DNSサーバーは、[政府、ISP企業,そして大企業](http://superuser.com/questions/472695/who-controls-the-dns-servers/472729) に管理されているが、それらの管理は複雑である。
* DNSサービスは[DDoS attack](http://dyn.com/blog/dyn-analysis-summary-of-friday-october-21-attack/) の例で、IPアドレスなしにユーザーがTwitterなどにアクセスできなくなったように、攻撃を受ける可能性がある。
* DNSサーバーは、[政府、ISP企業,そして大企業](http://superuser.com/questions/472695/who-controls-the-dns-servers/472729)に管理されているが、それらの管理は複雑である。
* DNSサービスは[DDoS attack](http://dyn.com/blog/dyn-analysis-summary-of-friday-october-21-attack/)の例で、IPアドレスなしにユーザーがTwitterなどにアクセスできなくなったように、攻撃を受ける可能性がある。
### その他の参考資料、ページ
* [DNS アーキテクチャ](https://technet.microsoft.com/en-us/library/dd197427(v=ws.10) .aspx)
* [Wikipedia](https://en.wikipedia.org/wiki/Domain_Name_System)
* [DNS 記事](https://support.dnsimple.com/categories/dns/)
* [DNS アーキテクチャ](https://technet.microsoft.com/en-us/library/dd197427(v=ws.10).aspx)
* [Wikipedia](https://en.wikipedia.org/wiki/Domain_Name_System)
* [DNS 記事](https://support.dnsimple.com/categories/dns/)
## コンテンツデリバリーネットワーク(Content delivery network)
## コンテンツデリバリーネットワーク(Content delivery network)
<p align="center">
<img src="images/h9TAuGI.jpg">
@ -573,7 +573,7 @@ DNSは少数のオーソライズされたサーバーが上位に位置する
<i><a href=https://www.creative-artworks.eu/why-use-a-content-delivery-network-cdn/>Source: Why use a CDN</a></i>
</p>
コンテンツデリバリーネットワーク(CDN) は世界中に配置されたプロキシサーバーのネットワークがユーザーに一番地理的に近いサーバーからコンテンツを配信するシステムのことです。AmazonのCloudFrontなどは例外的にダイナミックなコンテンツも配信しますが、一般的に、HTML/CSS/JS、写真、そして動画などの静的ファイルがCDNを通じて配信されます。そのサイトのDNSがクライアントにどのサーバーと交信するかという情報を伝えます。
コンテンツデリバリーネットワーク(CDN)は世界中に配置されたプロキシサーバーのネットワークがユーザーに一番地理的に近いサーバーからコンテンツを配信するシステムのことです。AmazonのCloudFrontなどは例外的にダイナミックなコンテンツも配信しますが、一般的に、HTML/CSS/JS、写真、そして動画などの静的ファイルがCDNを通じて配信されます。そのサイトのDNSがクライアントにどのサーバーと交信するかという情報を伝えます。
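To make "the geographically closest server" concrete, here is a toy sketch that picks the nearest edge by great-circle distance; the edge names and coordinates are invented for illustration and are not part of this guide:

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical edge locations: name -> (latitude, longitude)
EDGES = {
    "tokyo": (35.68, 139.69),
    "frankfurt": (50.11, 8.68),
    "virginia": (38.95, -77.45),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_edge(client_location):
    """Return the edge closest to the client, as a CDN's DNS layer might."""
    return min(EDGES, key=lambda name: haversine_km(client_location, EDGES[name]))

print(nearest_edge((35.0, 135.8)))  # a client near Kyoto resolves to "tokyo"
```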
CDNを用いてコンテンツを配信することで以下の二つの理由でパフォーマンスが劇的に向上します:
@ -590,7 +590,7 @@ CDNを用いてコンテンツを配信することで以下の二つの理由
プルCDNでは一人目のユーザーがリクエストした時に、新しいコンテンツをサービスのサーバーから取得します。コンテンツは自分のサーバーに保存して、CDNを指すURLを書き換えます。結果として、CDNにコンテンツがキャッシュされるまではリクエスト処理が遅くなります。
[time-to-live (TTL) ](https://en.wikipedia.org/wiki/Time_to_live) はコンテンツがどれだけの期間キャッシュされるかを規定します。プルCDNはCDN 上でのストレージスペースを最小化しますが、有効期限が切れたファイルが更新前にプルされてしまうことで冗長なトラフィックに繋がってしまう可能性があります。
[time-to-live (TTL)](https://en.wikipedia.org/wiki/Time_to_live) はコンテンツがどれだけの期間キャッシュされるかを規定します。プルCDNはCDN 上でのストレージスペースを最小化しますが、有効期限が切れたファイルが更新前にプルされてしまうことで冗長なトラフィックに繋がってしまう可能性があります。
大規模なトラフィックのあるサイトではプルCDNが相性がいいでしょう。というのも、トラフィックの大部分は最近リクエストされ、CDNに残っているコンテンツにアクセスするものであることが多いからです。
@ -602,9 +602,9 @@ CDNを用いてコンテンツを配信することで以下の二つの理由
### その他の参考資料、ページ
* [グローバルに分散されたコンテンツデリバリーネットワーク](http://repository.cmu.edu/cgi/viewcontent.cgi?article=2112&context=compsci)
* [プッシュCDNとプルCDNの違い](http://www.travelblogadvice.com/technical/the-differences-between-push-and-pull-cdns/)
* [Wikipedia](https://en.wikipedia.org/wiki/Content_delivery_network)
* [グローバルに分散されたコンテンツデリバリーネットワーク](http://repository.cmu.edu/cgi/viewcontent.cgi?article=2112&context=compsci)
* [プッシュCDNとプルCDNの違い](http://www.travelblogadvice.com/technical/the-differences-between-push-and-pull-cdns/)
* [Wikipedia](https://en.wikipedia.org/wiki/Content_delivery_network)
## ロードバランサー
@ -635,13 +635,13 @@ CDNを用いてコンテンツを配信することで以下の二つの理由
* ランダム
* Least loaded
* セッション/クッキー
* [ラウンドロビンもしくは加重ラウンドロビン](http://g33kinfo.com/info/archives/2657)
* [Layer 4](#layer-4-ロードバランシング)
* [Layer 7](#layer-7-ロードバランシング)
* [ラウンドロビンもしくは加重ラウンドロビン](http://g33kinfo.com/info/archives/2657)
* [Layer 4](#layer-4-ロードバランシング)
* [Layer 7](#layer-7-ロードバランシング)
### Layer 4 ロードバランシング
Layer 4 ロードバランサーは [トランスポートレイヤー](#通信) を参照してどのようにリクエストを配分するか判断します。一般的に、トランスポートレイヤーとしては、ソース、送信先IPアドレス、ヘッダーに記述されたポート番号が含まれますが、パケットの中身のコンテンツは含みません。 Layer 4 ロードバランサーはネットワークパケットを上流サーバーへ届け、上流サーバーから配信することでネットワークアドレス変換 [Network Address Translation (NAT) ](https://www.nginx.com/resources/glossary/layer-4-load-balancing/) を実現します。
Layer 4 ロードバランサーは [トランスポートレイヤー](#通信) を参照してどのようにリクエストを配分するか判断します。一般的に、トランスポートレイヤーとしては、ソース、送信先IPアドレス、ヘッダーに記述されたポート番号が含まれますが、パケットの中身のコンテンツは含みません。 Layer 4 ロードバランサーはネットワークパケットを上流サーバーへ届け、上流サーバーから配信することでネットワークアドレス変換 [Network Address Translation (NAT)](https://www.nginx.com/resources/glossary/layer-4-load-balancing/) を実現します。
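To make the dispatch step concrete, here is a minimal round-robin selector sketch; the backend addresses and helper name are placeholders for illustration, not part of this guide:

```python
import itertools

# Placeholder upstream servers behind the load balancer.
BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]

# itertools.cycle yields the backends in order, forever (round robin).
_round_robin = itertools.cycle(BACKENDS)

def pick_backend():
    """Return the next upstream to forward a request or packet to."""
    return next(_round_robin)

for _ in range(5):
    print(pick_backend())
```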
### Layer 7 ロードバランシング
@ -657,7 +657,7 @@ Layer 7 ロードバランサーは [アプリケーションレイヤー](#通
* 水平的にスケーリングしていくと、複雑さが増す上に、サーバーのクローニングが必要になる。
* サーバーはステートレスである必要がある: ユーザーに関連するセッションや、プロフィール写真などのデータを持ってはいけない
* セッションは一元的な[データベース](#データベース) (SQL、 NoSQL) などのデータストアにストアされるか [キャッシュ](#キャッシュ) (Redis、 Memcached) に残す必要があります。
* セッションは一元的な[データベース](#データベース) (SQL、 NoSQL)などのデータストアにストアされるか [キャッシュ](#キャッシュ) (Redis、 Memcached)に残す必要があります。
* キャッシュやデータベースなどの下流サーバーは上流サーバーがスケールアウトするにつれてより多くの同時接続を保たなければなりません。
### 欠点: ロードバランサー
@ -668,15 +668,15 @@ Layer 7 ロードバランサーは [アプリケーションレイヤー](#通
### その他の参考資料、ページ
* [NGINX アーキテクチャ](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
* [HAProxy アーキテクチャガイド](http://www.haproxy.org/download/1.2/doc/architecture.txt)
* [スケーラビリティ](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
* [NGINX アーキテクチャ](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
* [HAProxy アーキテクチャガイド](http://www.haproxy.org/download/1.2/doc/architecture.txt)
* [スケーラビリティ](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
* [Wikipedia](https://en.wikipedia.org/wiki/Load_balancing_(computing))
* [Layer 4 ロードバランシング](https://www.nginx.com/resources/glossary/layer-4-load-balancing/)
* [Layer 7 ロードバランシング](https://www.nginx.com/resources/glossary/layer-7-load-balancing/)
* [ELB listener config](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html)
* [Layer 4 ロードバランシング](https://www.nginx.com/resources/glossary/layer-4-load-balancing/)
* [Layer 7 ロードバランシング](https://www.nginx.com/resources/glossary/layer-7-load-balancing/)
* [ELB listener config](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html)
## リバースプロキシ(webサーバー)
## リバースプロキシ(webサーバー)
<p align="center">
<img src="images/n41Azff.png">
@ -714,10 +714,10 @@ Layer 7 ロードバランサーは [アプリケーションレイヤー](#通
### その他の参考資料、ページ
* [リバースプロキシ vs ロードバランサー](https://www.nginx.com/resources/glossary/reverse-proxy-vs-load-balancer/)
* [NGINX アーキテクチャ](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
* [HAProxy アーキテクチャ ガイド](http://www.haproxy.org/download/1.2/doc/architecture.txt)
* [Wikipedia](https://en.wikipedia.org/wiki/Reverse_proxy)
* [リバースプロキシ vs ロードバランサー](https://www.nginx.com/resources/glossary/reverse-proxy-vs-load-balancer/)
* [NGINX アーキテクチャ](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
* [HAProxy アーキテクチャ ガイド](http://www.haproxy.org/download/1.2/doc/architecture.txt)
* [Wikipedia](https://en.wikipedia.org/wiki/Reverse_proxy)
## アプリケーション層
@ -731,17 +731,17 @@ Layer 7 ロードバランサーは [アプリケーションレイヤー](#通
**単一責任の原則** では、小さい自律的なサービスが協調して動くように提唱しています。小さいサービスの小さいチームが急成長のためにより積極的な計画を立てられるようにするためです。
アプリケーション層は[非同期処理](#非同期処理) もサポートします。
アプリケーション層は[非同期処理](#非同期処理)もサポートします。
### マイクロサービス
独立してデプロイできる、小規模なモジュール様式である[マイクロサービス](https://en.wikipedia.org/wiki/Microservices) もこの議論に関係してくる技術でしょう。それぞれのサービスは独自のプロセスを処理し、明確で軽量なメカニズムで通信して、その目的とする機能を実現します。<sup><a href=https://smartbear.com/learn/api-design/what-are-microservices>1</a></sup>
独立してデプロイできる、小規模なモジュール様式である[マイクロサービス](https://en.wikipedia.org/wiki/Microservices)もこの議論に関係してくる技術でしょう。それぞれのサービスは独自のプロセスを処理し、明確で軽量なメカニズムで通信して、その目的とする機能を実現します。<sup><a href=https://smartbear.com/learn/api-design/what-are-microservices>1</a></sup>
例えばPinterestでは以下のようなマイクロサービスに分かれています。ユーザープロフィール、フォロワー、フィード、検索、写真アップロードなどです。
### サービスディスカバリー
[Consul](https://www.consul.io/docs/index.html) 、 [Etcd](https://coreos.com/etcd/docs/latest) 、 [Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) などのシステムでは、登録されているサービスの名前、アドレス、ポートの情報を監視することで、サービス同士が互いを見つけやすくしています。サービスの完全性の確認には [Health checks](https://www.consul.io/intro/getting-started/checks.html) が便利で、これには [HTTP](#hypertext-transfer-protocol-http) エンドポイントがよく使われます。 Consul と Etcd のいずれも組み込みの [key-value store](#キーバリューストア) を持っており、設定データや共有データなどのデータを保存しておくことに使われます。
[Consul](https://www.consul.io/docs/index.html)、 [Etcd](https://coreos.com/etcd/docs/latest)、 [Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) などのシステムでは、登録されているサービスの名前、アドレス、ポートの情報を監視することで、サービス同士が互いを見つけやすくしています。サービスの完全性の確認には [Health checks](https://www.consul.io/intro/getting-started/checks.html) が便利で、これには [HTTP](#hypertext-transfer-protocol-http) エンドポイントがよく使われます。 Consul と Etcd のいずれも組み込みの [key-value store](#キーバリューストア) を持っており、設定データや共有データなどのデータを保存しておくことに使われます。
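As a rough sketch of the kind of HTTP health endpoint such systems can poll (the `/health` path and port are illustrative assumptions, not taken from Consul's documentation):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reply 200 on the assumed /health path so a poller can mark
        # this instance as passing; any other path returns 404.
        if self.path == "/health":
            body = b'{"status": "ok"}'
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Port 8080 is an arbitrary choice for this sketch.
    HTTPServer(("", 8080), HealthHandler).serve_forever()
```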
### 欠点: アプリケーション層
@ -750,11 +750,11 @@ Layer 7 ロードバランサーは [アプリケーションレイヤー](#通
### その他の参考資料、ページ
* [スケールするシステムアーキテクチャを設計するためのイントロ](http://lethain.com/introduction-to-architecting-systems-for-scale)
* [システム設計インタビューを紐解く](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
* [サービス指向アーキテクチャ](https://en.wikipedia.org/wiki/Service-oriented_architecture)
* [Zookeeperのイントロダクション](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper)
* [マイクロサービスを作るために知っておきたいこと](https://cloudncode.wordpress.com/2016/07/22/msa-getting-started/)
* [スケールするシステムアーキテクチャを設計するためのイントロ](http://lethain.com/introduction-to-architecting-systems-for-scale)
* [システム設計インタビューを紐解く](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
* [サービス指向アーキテクチャ](https://en.wikipedia.org/wiki/Service-oriented_architecture)
* [Zookeeperのイントロダクション](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper)
* [マイクロサービスを作るために知っておきたいこと](https://cloudncode.wordpress.com/2016/07/22/msa-getting-started/)
## データベース
@ -764,11 +764,11 @@ Layer 7 ロードバランサーは [アプリケーションレイヤー](#通
<i><a href=https://www.youtube.com/watch?v=w95murBkYmU>Source: Scaling up to your first 10 million users</a></i>
</p>
### リレーショナルデータベースマネジメントシステム (RDBMS)
### リレーショナルデータベースマネジメントシステム (RDBMS)
SQLなどのリレーショナルデータベースはテーブルに整理されたデータの集合である。
**ACID** はリレーショナルデータベースにおける[トランザクション](https://en.wikipedia.org/wiki/Database_transaction) のプロパティの集合である
**ACID** はリレーショナルデータベースにおける[トランザクション](https://en.wikipedia.org/wiki/Database_transaction)のプロパティの集合である
* **不可分性** - それぞれのトランザクションはあるかないかのいずれかである
* **一貫性** - どんなトランザクションもデータベースをある確かな状態から次の状態に遷移させる。
@ -790,7 +790,7 @@ SQLなどのリレーショナルデータベースはテーブルに整理さ
##### 欠点: マスタースレーブ レプリケーション
* スレーブをマスターに昇格させるには追加のロジックが必要になる。
* マスタースレーブ レプリケーション、マスターマスター レプリケーションの **両方** の欠点は[欠点: レプリケーション](#欠点-マスタースレーブ-レプリケーション) を参照
* マスタースレーブ レプリケーション、マスターマスター レプリケーションの **両方** の欠点は[欠点: レプリケーション](#欠点-マスタースレーブ-レプリケーション)を参照
#### マスターマスター レプリケーション
@ -819,8 +819,8 @@ SQLなどのリレーショナルデータベースはテーブルに整理さ
##### その他の参考資料、ページ: レプリケーション
* [スケーラビリティ、 可用性、 スタビリティ パターン](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
* [マルチマスター レプリケーション](https://en.wikipedia.org/wiki/Multi-master_replication)
* [スケーラビリティ、 可用性、 スタビリティ パターン](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
* [マルチマスター レプリケーション](https://en.wikipedia.org/wiki/Multi-master_replication)
#### Federation
@ -836,12 +836,12 @@ SQLなどのリレーショナルデータベースはテーブルに整理さ
* 大規模な処理やテーブルを要するスキーマの場合、フェデレーションは効果的とは言えないでしょう。
* どのデータベースに読み書きをするのかを指定するアプリケーションロジックを更新しなければなりません。
* [server link](http://stackoverflow.com/questions/5145637/querying-data-by-joining-two-tables-in-two-database-on-different-servers) で二つのデータベースからのデータを連結するのはより複雑になるでしょう。
* [server link](http://stackoverflow.com/questions/5145637/querying-data-by-joining-two-tables-in-two-database-on-different-servers)で二つのデータベースからのデータを連結するのはより複雑になるでしょう。
* フェデレーションでは追加のハードウェアが必要になり、複雑性も増します。
##### その他の参考資料、ページ: federation
* [Scaling up to your first 10 million users](https://www.youtube.com/watch?v=w95murBkYmU)
* [Scaling up to your first 10 million users](https://www.youtube.com/watch?v=w95murBkYmU)
#### シャーディング
@ -853,7 +853,7 @@ SQLなどのリレーショナルデータベースはテーブルに整理さ
シャーディングでは異なるデータベースにそれぞれがデータのサブセット断片のみを持つようにデータを分割します。ユーザーデータベースを例にとると、ユーザー数が増えるにつれてクラスターにはより多くの断片が加えられることになります。
[federation](#federation) の利点に似ていて、シャーディングでは読み書きのトラフィックを減らし、レプリケーションを減らし、キャッシュヒットを増やすことができます。インデックスサイズも減らすことができます。一般的にはインデックスサイズを減らすと、パフォーマンスが向上しクエリ速度が速くなります。なにがしかのデータを複製する機能がなければデータロスにつながりますが、もし、一つのシャードが落ちても、他のシャードが動いていることになります。フェデレーションと同じく、単一の中央マスターが書き込みの処理をしなくても、並列で書き込みを処理することができ、スループットの向上が期待できます。
[federation](#federation)の利点に似ていて、シャーディングでは読み書きのトラフィックを減らし、レプリケーションを減らし、キャッシュヒットを増やすことができます。インデックスサイズも減らすことができます。一般的にはインデックスサイズを減らすと、パフォーマンスが向上しクエリ速度が速くなります。なにがしかのデータを複製する機能がなければデータロスにつながりますが、もし、一つのシャードが落ちても、他のシャードが動いていることになります。フェデレーションと同じく、単一の中央マスターが書き込みの処理をしなくても、並列で書き込みを処理することができ、スループットの向上が期待できます。
ユーザーテーブルをシャードする一般的な方法は、ユーザーのラストネームイニシャルでシャードするか、ユーザーの地理的配置でシャードするなどです。
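A minimal sketch of deriving a shard from a key (the shard count and function name are illustrative); consistent hashing, referenced below, is the usual refinement that limits data movement when shards are added:

```python
import hashlib

NUM_SHARDS = 4  # illustrative shard count

def shard_for(user_id):
    """Map a key onto a deterministic shard index in [0, NUM_SHARDS)."""
    digest = hashlib.md5(str(user_id).encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

print(shard_for(12345))    # same id always lands on the same shard
print(shard_for("alice"))  # works for string keys too
```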
@ -867,15 +867,15 @@ SQLなどのリレーショナルデータベースはテーブルに整理さ
##### その他の参考資料、ページ: シャーディング
* [シャードの登場](http://highscalability.com/blog/2009/8/6/an-unorthodox-approach-to-database-design-the-coming-of-the.html)
* [シャードの登場](http://highscalability.com/blog/2009/8/6/an-unorthodox-approach-to-database-design-the-coming-of-the.html)
* [シャードデータベースアーキテクチャ](https://en.wikipedia.org/wiki/Shard_(database_architecture))
* [Consistent hashing](http://www.paperplanes.de/2011/12/9/the-magic-of-consistent-hashing.html)
* [Consistent hashing](http://www.paperplanes.de/2011/12/9/the-magic-of-consistent-hashing.html)
#### 非正規化
非正規化では、書き込みのパフォーマンスをいくらか犠牲にして読み込みのパフォーマンスを向上させようとします。計算的に重いテーブルの結合などをせずに、複数のテーブルに冗長なデータのコピーが書き込まれるのを許容します。いくつかのRDBMS例えば、[PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL) やOracleはこの冗長な情報を取り扱い、一貫性を保つための[materialized views](https://en.wikipedia.org/wiki/Materialized_view) という機能をサポートしています。
[フェデレーション](#federation) や [シャーディング](#シャーディング) などのテクニックによってそれぞれのデータセンターに分配されたデータを合一させることはとても複雑な作業です。非正規化によってそのような複雑な処理をしなくて済むようになります。
[フェデレーション](#federation) や [シャーディング](#シャーディング)などのテクニックによってそれぞれのデータセンターに分配されたデータを合一させることはとても複雑な作業です。非正規化によってそのような複雑な処理をしなくて済むようになります。
多くのシステムで、100対1あるいは1000対1くらいになるくらい読み取りの方が、書き込みのトラフィックよりも多いことでしょう。読み込みを行うために、複雑なデータベースのジョイン処理が含まれるものは計算的に高価につきますし、ディスクの処理時間で膨大な時間を費消してしまうことになります。
@ -887,7 +887,7 @@ SQLなどのリレーショナルデータベースはテーブルに整理さ
###### その他の参考資料、ページ: 非正規化
* [Denormalization](https://en.wikipedia.org/wiki/Denormalization)
* [Denormalization](https://en.wikipedia.org/wiki/Denormalization)
#### SQLチューニング
@ -895,7 +895,7 @@ SQLチューニングは広範な知識を必要とする分野で多くの [本
ボトルネックを明らかにし、シミュレートする上で、 **ベンチマーク** を定め、 **プロファイル** することはとても重要です。
* **ベンチマーク** - [ab](http://httpd.apache.org/docs/2.2/programs/ab.html) などのツールを用いて、高負荷の状況をシミュレーションしてみましょう。
* **ベンチマーク** - [ab](http://httpd.apache.org/docs/2.2/programs/ab.html)などのツールを用いて、高負荷の状況をシミュレーションしてみましょう。
* **プロファイル** - [slow query log](http://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html) などのツールを用いて、パフォーマンス状況の確認をしましょう。
ベンチマークとプロファイルをとることで以下のような効率化の選択肢をとることになるでしょう。
@ -909,20 +909,20 @@ SQLチューニングは広範な知識を必要とする分野で多くの [本
* 2の32乗や40億以下を超えない程度の大きな数には INT を使いましょう。
* 通貨に関しては小数点表示上のエラーを避けるために `DECIMAL` を使いましょう。
* 大きな `BLOBS` を保存するのは避けましょう。どこからそのオブジェクトを取ってくることができるかの情報を保存しましょう。
* `VARCHAR(255) ` は8ビットで数えられる最大の文字数です。一部のDBMSでは、1バイトの利用効率を最大化するためにこの文字数がよく使われます。
* `VARCHAR(255)` は8ビットで数えられる最大の文字数です。一部のDBMSでは、1バイトの利用効率を最大化するためにこの文字数がよく使われます。
* [検索性能向上のため](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search) 、可能であれば `NOT NULL` 制約を設定しましょう。
##### インデックスを効果的に用いる
* クエリ(`SELECT`、 `GROUP BY``ORDER BY``JOIN`) の対象となる列にインデックスを使うことで速度を向上できるかもしれません。
* インデックスは通常、平衡探索木である[B木](https://en.wikipedia.org/wiki/B-tree) の形で表されます。B木によりデータは常にソートされた状態になります。また検索、順次アクセス、挿入、削除を対数時間で行えます。
* インデックスは通常、平衡探索木である[B木](https://en.wikipedia.org/wiki/B-tree)の形で表されます。B木によりデータは常にソートされた状態になります。また検索、順次アクセス、挿入、削除を対数時間で行えます。
* インデックスを配置することはデータをメモリーに残すことにつながりより容量を必要とします。
* インデックスの更新も必要になるため書き込みも遅くなります。
* 大量のデータをロードする際には、インデックスを切ってからデータをロードして再びインデックスをビルドした方が速いことがあります。
##### 高負荷なジョインを避ける
* パフォーマンス上必要なところには[非正規化](#非正規化) を適用する
* パフォーマンス上必要なところには[非正規化](#非正規化)を適用する
##### テーブルのパーティション
@ -934,10 +934,10 @@ SQLチューニングは広範な知識を必要とする分野で多くの [本
##### その他の参考資料、ページ: SQLチューニング
* [MySQLクエリを最適化するためのTips](http://20bits.com/article/10-tips-for-optimizing-mysql-queries-that-dont-suck)
* [VARCHAR(255) をやたらよく見かけるのはなんで?](http://stackoverflow.com/questions/1217466/is-there-a-good-reason-i-see-varchar255-used-so-often-as-opposed-to-another-l)
* [null値はどのようにパフォーマンスに影響するのか](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search)
* [Slow query log](http://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html)
* [MySQLクエリを最適化するためのTips](http://20bits.com/article/10-tips-for-optimizing-mysql-queries-that-dont-suck)
* [VARCHAR(255)をやたらよく見かけるのはなんで?](http://stackoverflow.com/questions/1217466/is-there-a-good-reason-i-see-varchar255-used-so-often-as-opposed-to-another-l)
* [null値はどのようにパフォーマンスに影響するのか](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search)
* [Slow query log](http://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html)
### NoSQL
@ -955,7 +955,7 @@ NoSQL は **key-value store**、 **document-store**、 **wide column store**、
> 概要: ハッシュテーブル
キーバリューストアでは一般的にO(1) の読み書きができ、それらはメモリないしSSDで裏付けられています。データストアはキーを [辞書的順序](https://en.wikipedia.org/wiki/Lexicographical_order) で保持することでキーの効率的な取得を可能にしています。キーバリューストアではメタデータを値とともに保持することが可能です。
キーバリューストアでは一般的にO(1)の読み書きができ、それらはメモリないしSSDで裏付けられています。データストアはキーを [辞書的順序](https://en.wikipedia.org/wiki/Lexicographical_order) で保持することでキーの効率的な取得を可能にしています。キーバリューストアではメタデータを値とともに保持することが可能です。
キーバリューストアはハイパフォーマンスな挙動が可能で、単純なデータモデルやインメモリーキャッシュレイヤーなどのデータが急速に変わる場合などに使われます。単純な処理のみに機能が制限されているので、追加の処理機能が必要な場合にはその複雑性はアプリケーション層に載せることになります。
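A toy illustration of this model (names and keys are invented): O(1) point reads and writes on a hash map, plus a range scan over keys kept in lexicographic order:

```python
import bisect

class TinyKV:
    """In-memory key-value store with O(1) get/set and sorted range scans."""

    def __init__(self):
        self._data = {}          # hash map for O(1) point lookups
        self._sorted_keys = []   # keys kept in lexicographic order

    def set(self, key, value):
        if key not in self._data:
            bisect.insort(self._sorted_keys, key)
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

    def scan(self, start, end):
        """Yield (key, value) pairs with start <= key < end."""
        lo = bisect.bisect_left(self._sorted_keys, start)
        hi = bisect.bisect_left(self._sorted_keys, end)
        for key in self._sorted_keys[lo:hi]:
            yield key, self._data[key]

kv = TinyKV()
kv.set("user:100", {"name": "alice"})
kv.set("user:101", {"name": "bob"})
print(kv.get("user:100"))
print(list(kv.scan("user:", "user;")))  # ';' sorts just after ':', so this covers the "user:" prefix
```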
@ -963,16 +963,16 @@ NoSQL は **key-value store**、 **document-store**、 **wide column store**、
##### その他の参考資料、ページ: キーバリューストア
* [キーバリューデータベース](https://en.wikipedia.org/wiki/Key-value_database)
* [キーバリューストアの欠点](http://stackoverflow.com/questions/4056093/what-are-the-disadvantages-of-using-a-key-value-table-over-nullable-columns-or)
* [Redisアーキテクチャ](http://qnimate.com/overview-of-redis-architecture/)
* [メムキャッシュアーキテクチャ](https://www.adayinthelifeof.nl/2011/02/06/memcache-internals/)
* [キーバリューデータベース](https://en.wikipedia.org/wiki/Key-value_database)
* [キーバリューストアの欠点](http://stackoverflow.com/questions/4056093/what-are-the-disadvantages-of-using-a-key-value-table-over-nullable-columns-or)
* [Redisアーキテクチャ](http://qnimate.com/overview-of-redis-architecture/)
* [メムキャッシュアーキテクチャ](https://www.adayinthelifeof.nl/2011/02/06/memcache-internals/)
#### ドキュメントストア
> 概要: ドキュメントがバリューとして保存されたキーバリューストア
ドキュメントストアはオブジェクトに関する全ての情報を持つドキュメント(XML、 JSON、 binaryなど) を中心に据えたシステムです。ドキュメントストアでは、ドキュメント自身の内部構造に基づいた、APIもしくはクエリ言語を提供します。 *メモ:多くのキーバリューストアでは、値のメタデータを扱う機能を含んでいますが、そのことによって二つドキュメントストアとの境界線が曖昧になってしまっています。*
ドキュメントストアはオブジェクトに関する全ての情報を持つドキュメント(XML、 JSON、 binaryなど)を中心に据えたシステムです。ドキュメントストアでは、ドキュメント自身の内部構造に基づいた、APIもしくはクエリ言語を提供します。 *メモ:多くのキーバリューストアでは、値のメタデータを扱う機能を含んでいますが、そのことによって二つドキュメントストアとの境界線が曖昧になってしまっています。*
以上のことを実現するために、ドキュメントはコレクション、タグ、メタデータやディレクトリなどとして整理されています。ドキュメント同士はまとめてグループにできるものの、それぞれで全く異なるフィールドを持つ可能性があります。
@ -982,10 +982,10 @@ NoSQL は **key-value store**、 **document-store**、 **wide column store**、
##### その他の参考資料、ページ: ドキュメントストア
* [ドキュメント指向 データベース](https://en.wikipedia.org/wiki/Document-oriented_database)
* [MongoDB アーキテクチャ](https://www.mongodb.com/mongodb-architecture)
* [CouchDB アーキテクチャ](https://blog.couchdb.org/2016/08/01/couchdb-2-0-architecture/)
* [Elasticsearch アーキテクチャ](https://www.elastic.co/blog/found-elasticsearch-from-the-bottom-up)
* [ドキュメント指向 データベース](https://en.wikipedia.org/wiki/Document-oriented_database)
* [MongoDB アーキテクチャ](https://www.mongodb.com/mongodb-architecture)
* [CouchDB アーキテクチャ](https://blog.couchdb.org/2016/08/01/couchdb-2-0-architecture/)
* [Elasticsearch アーキテクチャ](https://www.elastic.co/blog/found-elasticsearch-from-the-bottom-up)
#### ワイドカラムストア
@ -999,16 +999,16 @@ NoSQL は **key-value store**、 **document-store**、 **wide column store**、
ワイドカラムストアのデータの基本単位はカラムネーム・バリューのペアです。それぞれのカラムはカラムファミリーとしてSQLテーブルのようにグループ化することができます。スーパーカラムファミリーはカラムファミリーの集合です。それぞれのカラムには行キーでアクセスすることができます。同じ行キーを持つカラムは同じ行として認識されます。それぞれの値は、バージョン管理とコンフリクトが起きた時のために、タイムスタンプを含みます。
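A rough in-memory picture of that layout, for illustration only (the names and values are invented):

```python
import time
from collections import defaultdict

# row key -> column family -> column name -> (value, timestamp)
table = defaultdict(lambda: defaultdict(dict))

def put(row_key, family, column, value):
    # The timestamp supports versioning and conflict resolution.
    table[row_key][family][column] = (value, time.time())

put("user#100", "profile", "name", "alice")
put("user#100", "profile", "email", "alice@example.com")
put("user#100", "stats", "logins", 3)

print(table["user#100"]["profile"]["name"])  # -> ('alice', <timestamp>)
```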
Googleは[Bigtable](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf) を初のワイドカラムストアとして発表しました。それがオープンソースでHadoopなどでよく使われる[HBase](https://www.mapr.com/blog/in-depth-look-hbase-architecture) やFacebookによる[Cassandra](http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/architecture/architectureIntro_c.html) などのプロジェクトに影響を与えました。BigTable、HBaseやCassandraなどのストアはキーを辞書形式で保持することで選択したキーレンジでのデータ取得を効率的にします。
Googleは[Bigtable](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf)を初のワイドカラムストアとして発表しました。それがオープンソースでHadoopなどでよく使われる[HBase](https://www.mapr.com/blog/in-depth-look-hbase-architecture) やFacebookによる[Cassandra](http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/architecture/architectureIntro_c.html) などのプロジェクトに影響を与えました。BigTable、HBaseやCassandraなどのストアはキーを辞書形式で保持することで選択したキーレンジでのデータ取得を効率的にします。
ワイドカラムストアは高い可用性とスケーラビリティを担保します。これらはとても大規模なデータセットを扱うことによく使われます。
##### その他の参考資料、ページ: ワイドカラムストア
* [SQL & NoSQL簡単に歴史をさらう](http://blog.grio.com/2015/11/sql-nosql-a-brief-history.html)
* [Bigtable アーキテクチャ](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf)
* [HBase アーキテクチャ](https://www.mapr.com/blog/in-depth-look-hbase-architecture)
* [Cassandra アーキテクチャ](http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/architecture/architectureIntro_c.html)
* [SQL & NoSQL簡単に歴史をさらう](http://blog.grio.com/2015/11/sql-nosql-a-brief-history.html)
* [Bigtable アーキテクチャ](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf)
* [HBase アーキテクチャ](https://www.mapr.com/blog/in-depth-look-hbase-architecture)
* [Cassandra アーキテクチャ](http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/architecture/architectureIntro_c.html)
#### グラフデータベース
@ -1022,21 +1022,21 @@ Googleは[Bigtable](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/cha
グラフデータベースでは、それぞれのノードがレコードで、それぞれのアークは二つのノードを繋ぐ関係性として定義されます。グラフデータベースは多数の外部キーや多対多などの複雑な関係性を表すのに最適です。
グラフデータベースはSNSなどのサービスの複雑な関係性モデルなどについて高いパフォーマンスを発揮します。比較的新しく、まだ一般的には用いられていないので、開発ツールやリソースを探すのが他の方法に比べて難しいかもしれません。多くのグラフは[REST APIs](#representational-state-transfer-rest) を通じてのみアクセスできます。
グラフデータベースはSNSなどのサービスの複雑な関係性モデルなどについて高いパフォーマンスを発揮します。比較的新しく、まだ一般的には用いられていないので、開発ツールやリソースを探すのが他の方法に比べて難しいかもしれません。多くのグラフは[REST APIs](#representational-state-transfer-rest)を通じてのみアクセスできます。
##### その他の参考資料、ページ: グラフ
* [Graphデータベース](https://en.wikipedia.org/wiki/Graph_database)
* [Neo4j](https://neo4j.com/)
* [FlockDB](https://blog.twitter.com/2010/introducing-flockdb)
* [Graphデータベース](https://en.wikipedia.org/wiki/Graph_database)
* [Neo4j](https://neo4j.com/)
* [FlockDB](https://blog.twitter.com/2010/introducing-flockdb)
#### その他の参考資料、ページ: NoSQL
* [基本用語の説明](http://stackoverflow.com/questions/3342497/explanation-of-base-terminology)
* [NoSQLデータベースについて調査と選択ガイド](https://medium.com/baqend-blog/nosql-databases-a-survey-and-decision-guidance-ea7823a822d#.wskogqenq)
* [スケーラビリティ](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
* [NoSQLのイントロダクション](https://www.youtube.com/watch?v=qI_g07C_Q5I)
* [NoSQLパターン](http://horicky.blogspot.com/2009/11/nosql-patterns.html)
* [基本用語の説明](http://stackoverflow.com/questions/3342497/explanation-of-base-terminology)
* [NoSQLデータベースについて調査と選択ガイド](https://medium.com/baqend-blog/nosql-databases-a-survey-and-decision-guidance-ea7823a822d#.wskogqenq)
* [スケーラビリティ](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
* [NoSQLのイントロダクション](https://www.youtube.com/watch?v=qI_g07C_Q5I)
* [NoSQLパターン](http://horicky.blogspot.com/2009/11/nosql-patterns.html)
### SQLかNoSQLか
@ -1077,8 +1077,8 @@ NoSQLに適するサンプルデータ:
##### その他の参考資料、ページ:  SQLもしくはNoSQL
* [最初の1000万ユーザーにスケールアップするために](https://www.youtube.com/watch?v=w95murBkYmU)
* [SQLとNoSQLの違い](https://www.sitepoint.com/sql-vs-nosql-differences/)
* [最初の1000万ユーザーにスケールアップするために](https://www.youtube.com/watch?v=w95murBkYmU)
* [SQLとNoSQLの違い](https://www.sitepoint.com/sql-vs-nosql-differences/)
## キャッシュ
@ -1110,7 +1110,7 @@ NoSQLに適するサンプルデータ:
### アプリケーションキャッシング
メムキャッシュなどのIn-memoryキャッシュやRedisはアプリケーションとデータストレージの間のキーバリューストアです。データはRAMで保持されるため、データがディスクで保存される一般的なデータベースよりもだいぶ速いです。RAM容量はディスクよりも限られているので、[least recently used (LRU) ](https://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used) などの[cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) アルゴリズムが 'コールド' なエントリを弾き、'ホット' なデータをRAMに保存します。
メムキャッシュなどのIn-memoryキャッシュやRedisはアプリケーションとデータストレージの間のキーバリューストアです。データはRAMで保持されるため、データがディスクで保存される一般的なデータベースよりもだいぶ速いです。RAM容量はディスクよりも限られているので、[least recently used (LRU)](https://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used)などの[cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) アルゴリズムが 'コールド' なエントリを弾き、'ホット' なデータをRAMに保存します。
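A compact sketch of LRU eviction with a fixed, illustrative capacity (this is not Redis or Memcached code, just the idea of keeping hot entries and dropping cold ones):

```python
from collections import OrderedDict

class LRUCache:
    """Keep at most `capacity` hot entries; evict the least recently used."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._entries = OrderedDict()

    def get(self, key, default=None):
        if key not in self._entries:
            return default
        self._entries.move_to_end(key)   # mark as most recently used
        return self._entries[key]

    def set(self, key, value):
        self._entries[key] = value
        self._entries.move_to_end(key)
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)  # drop the coldest entry

cache = LRUCache(capacity=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")         # "a" is now the most recently used
cache.set("c", 3)      # evicts "b"
print(cache.get("b"))  # None
```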
Redisはさらに以下のような機能を備えています:
@ -1167,12 +1167,12 @@ Redisはさらに以下のような機能を備えています:
* エントリを返します
```python
def get_user(self, user_id):
    # Check the cache first.
    user = cache.get("user.{0}", user_id)
    if user is None:
        # Cache miss: fall back to the database, then populate the cache.
        user = db.query("SELECT * FROM users WHERE user_id = {0}", user_id)
        if user is not None:
            key = "user.{0}".format(user_id)
            cache.set(key, json.dumps(user))
    return user
```
@ -1184,7 +1184,7 @@ def get_user(self, user_id) :
##### 欠点: キャッシュアサイド
* 各キャッシュミスは三つのトリップを呼び出すことになり、体感できるほどの遅延が起きてしまいます。
* データベースのデータが更新されるとキャッシュデータは古いものになってしまいます。time-to-live (TTL) を設定することでキャッシュエントリの更新を強制的に行う、もしくはライトスルーを採用することでこの問題は緩和できます。
* データベースのデータが更新されるとキャッシュデータは古いものになってしまいます。time-to-live (TTL)を設定することでキャッシュエントリの更新を強制的に行う、もしくはライトスルーを採用することでこの問題は緩和できます。
* ノードが落ちると、新規の空のノードで代替されることでレイテンシーが増加することになります。
#### ライトスルー
@ -1204,15 +1204,15 @@ def get_user(self, user_id) :
アプリケーションコード:
```
set_user(12345, {"foo":"bar"})
```
キャッシュコード:
```python
def set_user(user_id, values):
    # Write to the database and the cache as part of the same operation.
    user = db.query("UPDATE Users WHERE id = {0}", user_id, values)
    cache.set(user_id, user)
```
ライトスルーは書き込み処理のせいで全体としては遅いオペレーションですが、書き込まれたばかりのデータに関する読み込みは速いです。ユーザー側は一般的にデータ更新時の方が読み込み時よりもレイテンシーに許容的です。キャッシュ内のデータは最新版で保たれます。
@ -1222,7 +1222,7 @@ def set_user(user_id, values) :
* ノードが落ちたこと、もしくはスケーリングによって新しいノードが作成された時に、新しいノードはデータベース内のエントリーが更新されるまではエントリーをキャッシュしません。キャッシュアサイドとライトスルーを併用することでこの問題を緩和できます。
* 書き込まれたデータの大部分は一度も読み込まれることはありません。このデータはTTLによって圧縮することができます。
#### ライトビハインド (ライトバック)
#### ライトビハインド (ライトバック)
<p align="center">
<img src="images/rgSrvjG.png">
@ -1258,18 +1258,18 @@ def set_user(user_id, values) :
### 欠点: キャッシュ
* [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) などを用いて、データベースなどの真のデータとキャッシュの間の一貫性を保つ必要があります。
* [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms)などを用いて、データベースなどの真のデータとキャッシュの間の一貫性を保つ必要があります。
* Redisやmemcachedを追加することでアプリケーション構成を変更する必要があります。
* Cache invalidationも難しいですがそれに加えて、いつキャッシュを更新するかという複雑な問題にも悩まされることになります。
### その他の参考資料、ページ
* [From cache to in-memory data grid](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast)
* [スケーラブルなシステムデザインパターン](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html)
* [スケールできるシステムを設計するためのイントロダクション](http://lethain.com/introduction-to-architecting-systems-for-scale/)
* [スケーラビリティ、可用性、安定性、パターン](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
* [スケーラビリティ](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
* [AWS ElastiCacheのストラテジー](http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Strategies.html)
* [From cache to in-memory data grid](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast)
* [スケーラブルなシステムデザインパターン](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html)
* [スケールできるシステムを設計するためのイントロダクション](http://lethain.com/introduction-to-architecting-systems-for-scale/)
* [スケーラビリティ、可用性、安定性、パターン](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
* [スケーラビリティ](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
* [AWS ElastiCacheのストラテジー](http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Strategies.html)
* [Wikipedia](https://en.wikipedia.org/wiki/Cache_(computing))
## 非同期処理
@ -1305,7 +1305,7 @@ def set_user(user_id, values) :
### バックプレッシャー
もし、キューが拡大しすぎると、メモリーよりもキューの方が大きくなりキャッシュミスが起こり、ディスク読み出しにつながり、パフォーマンスが低下することにつながります。[バックプレッシャー](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html) はキューサイズを制限することで回避することができ、高いスループットを確保しキューにすでにあるジョブについてのレスポンス時間を短縮できます。キューがいっぱいになると、クライアントはサーバービジーもしくはHTTP 503をレスポンスとして受け取りまた後で時間をおいてアクセスするようにメッセージを受け取ります。クライアントは[exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) などによって後ほど再度時間を置いてリクエストすることができます。
もし、キューが拡大しすぎると、メモリーよりもキューの方が大きくなりキャッシュミスが起こり、ディスク読み出しにつながり、パフォーマンスが低下することにつながります。[バックプレッシャー](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html)はキューサイズを制限することで回避することができ、高いスループットを確保しキューにすでにあるジョブについてのレスポンス時間を短縮できます。キューがいっぱいになると、クライアントはサーバービジーもしくはHTTP 503をレスポンスとして受け取りまた後で時間をおいてアクセスするようにメッセージを受け取ります。クライアントは[exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff)などによって後ほど再度時間を置いてリクエストすることができます。
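Both halves of that paragraph, the bounded queue on the server and exponential backoff on the client, can be sketched as below. The in-process `queue.Queue` stands in for a real message queue, and returning `False` stands in for an HTTP 503 response; these are illustrative assumptions only.

```python
import queue
import random
import time

job_queue = queue.Queue(maxsize=1000)    # bounding the queue applies back pressure

def submit_job(job):
    try:
        job_queue.put_nowait(job)
        return True                      # accepted
    except queue.Full:
        return False                     # queue full: "server busy" / HTTP 503 equivalent

def submit_with_backoff(job, max_attempts=5):
    # Client side: retry later with exponentially growing waits plus jitter.
    for attempt in range(max_attempts):
        if submit_job(job):
            return True
        time.sleep((2 ** attempt) * 0.1 + random.uniform(0, 0.1))
    return False
```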
### 欠点: 非同期処理
@ -1313,10 +1313,10 @@ def set_user(user_id, values) :
### その他の参考資料、ページ
* [It's all a numbers game](https://www.youtube.com/watch?v=1KRYH75wgy4)
* [オーバーロードした時にバックプレッシャーを適用する](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html)
* [Little's law](https://en.wikipedia.org/wiki/Little%27s_law)
* [メッセージキューとタスクキューの違いとは?](https://www.quora.com/What-is-the-difference-between-a-message-queue-and-a-task-queue-Why-would-a-task-queue-require-a-message-broker-like-RabbitMQ-Redis-Celery-or-IronMQ-to-function)
## 通信
@ -1326,11 +1326,11 @@ def set_user(user_id, values) :
<i><a href=http://www.escotal.com/osilayer.html>Source: OSI 7 layer model</a></i>
</p>
### Hypertext transfer protocol (HTTP)
HTTP はクライアントとサーバー間でのデータをエンコードして転送するための手法です。リクエスト・レスポンスに関わるプロトコルです。クライアントがリクエストをサーバーに投げ、サーバーがリクエストに関係するコンテンツと完了ステータス情報をレスポンスとして返します。HTTPは自己完結するので、間にロードバランサー、キャッシュ、エンクリプション、圧縮などのどんな中間ルーターが入っても動くようにできています。
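For example, one request-response round trip issued from Python's standard library looks like this; example.com is only a placeholder host.

```python
from urllib.request import urlopen

# The client sends "GET /" and the server replies with a status line,
# headers, and the requested content.
with urlopen("http://example.com/") as response:
    print(response.status)                    # completion status, e.g. 200
    print(response.headers["Content-Type"])   # metadata about the content
    body = response.read()                    # the content itself
```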
基本的なHTTPリクエストはHTTP動詞(メソッド) とリソース(エンドポイント) で成り立っています。以下がよくあるHTTP動詞です。:
基本的なHTTPリクエストはHTTP動詞(メソッド)とリソース(エンドポイント)で成り立っています。以下がよくあるHTTP動詞です。:
| 動詞 | 詳細 | 冪等性* | セーフ | キャッシュできるか |
|---|---|---|---|---|
@ -1346,11 +1346,11 @@ HTTPは**TCP** や **UDP** などの低級プロトコルに依存している
#### その他の参考資料、ページ: HTTP
* [HTTPってなに?](https://www.nginx.com/resources/glossary/http/)
* [HTTP と TCPの違い](https://www.quora.com/What-is-the-difference-between-HTTP-protocol-and-TCP-protocol)
* [PUT と PATCHの違い](https://laracasts.com/discuss/channels/general-discussion/whats-the-differences-between-put-and-patch?page=1)
### 伝送制御プロトコル (TCP)
<p align="center">
<img src="images/JdAsdvG.jpg">
@ -1358,14 +1358,14 @@ HTTPは**TCP** や **UDP** などの低級プロトコルに依存している
<i><a href=http://www.wildbunny.co.uk/blog/2012/10/09/how-to-make-a-multi-player-game-part-1/>Source: How to make a multiplayer game</a></i>
</p>
TCPは[IP network](https://en.wikipedia.org/wiki/Internet_Protocol) の上で成り立つ接続プロトコルです。接続は[handshake](https://en.wikipedia.org/wiki/Handshaking) によって開始、解除されます。全ての送信されたパケットは欠損なしで送信先に送信された順番で到達するように以下の方法で保証されています:
TCPは[IP network](https://en.wikipedia.org/wiki/Internet_Protocol)の上で成り立つ接続プロトコルです。接続は[handshake](https://en.wikipedia.org/wiki/Handshaking)によって開始、解除されます。全ての送信されたパケットは欠損なしで送信先に送信された順番で到達するように以下の方法で保証されています:
* シーケンス番号と[checksum fields](https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Checksum_computation) が全てのパケットに用意されている
* シーケンス番号と[checksum fields](https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Checksum_computation)が全てのパケットに用意されている
* [Acknowledgement](https://en.wikipedia.org/wiki/Acknowledgement_(data_networks))パケットと自動再送信
もし送信者が正しいレスポンスを受け取らなかったとき、パケットを再送信します。複数のタイムアウトがあったとき、接続は解除されます。TCP は[フロー制御](https://en.wikipedia.org/wiki/Flow_control_(data)) と [輻輳制御](https://en.wikipedia.org/wiki/Network_congestion#Congestion_control) も実装しています。これらの機能によって速度は低下し、一般的にUDPよりも非効率な転送手段になっています。
もし送信者が正しいレスポンスを受け取らなかったとき、パケットを再送信します。複数のタイムアウトがあったとき、接続は解除されます。TCP は[フロー制御](https://en.wikipedia.org/wiki/Flow_control_(data)) と [輻輳制御](https://en.wikipedia.org/wiki/Network_congestion#Congestion_control)も実装しています。これらの機能によって速度は低下し、一般的にUDPよりも非効率な転送手段になっています。
ハイスループットを実現するために、ウェブサーバーはかなり大きな数のTCP接続を開いておくことがあり、そのことでメモリー使用が圧迫されます。ウェブサーバスレッドと例えば[memcached](#memcached) サーバーの間で多数のコネクションを保っておくことは高くつくかもしれません。可能なところではUDPに切り替えるだけでなく[コネクションプーリング](https://en.wikipedia.org/wiki/Connection_pool) なども役立つかもしれません。
ハイスループットを実現するために、ウェブサーバーはかなり大きな数のTCP接続を開いておくことがあり、そのことでメモリー使用が圧迫されます。ウェブサーバスレッドと例えば[memcached](#memcached) サーバーの間で多数のコネクションを保っておくことは高くつくかもしれません。可能なところではUDPに切り替えるだけでなく[コネクションプーリング](https://en.wikipedia.org/wiki/Connection_pool)なども役立つかもしれません。
TCPは高い依存性を要し、時間制約が厳しくないものに適しているでしょう。ウェブサーバー、データベース情報、SMTP、FTPやSSHなどの例に適用されます。
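The guarantees and the per-connection cost described above are easiest to see at the socket level. A minimal client sketch, assuming some TCP server is listening on 127.0.0.1:9000 (a made-up address for illustration):

```python
import socket

# create_connection() performs the TCP handshake; the OS then handles
# sequencing, acknowledgements, retransmission, flow control and congestion control.
with socket.create_connection(("127.0.0.1", 9000), timeout=5) as conn:
    for n in range(3):
        conn.sendall(f"request {n}\n".encode())  # reuse one connection instead of reconnecting
        reply = conn.recv(4096)                  # bytes arrive intact and in order, or an error is raised
```

Connection pooling amounts to keeping a set of such already-handshaked sockets around and handing them out as needed.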
@ -1374,7 +1374,7 @@ TCPは高い依存性を要し、時間制約が厳しくないものに適し
* 全てのデータが欠損することなしに届いてほしい
* ネットワークスループットの最適な自動推測をしてオペレーションしたい
### ユーザデータグラムプロトコル (UDP)
<p align="center">
<img src="images/yzDrJtA.jpg">
@ -1396,14 +1396,14 @@ TCPよりもUDPを使うのは:
#### その他の参考資料、ページ: TCP と UDP
* [ゲームプログラミングのためのネットワーク](http://gafferongames.com/networking-for-game-programmers/udp-vs-tcp/)
* [TCP と UDP プロトコルの主な違い](http://www.cyberciti.biz/faq/key-differences-between-tcp-and-udp-protocols/)
* [TCP と UDPの違い](http://stackoverflow.com/questions/5970383/difference-between-tcp-and-udp)
* [Transmission control protocol](https://en.wikipedia.org/wiki/Transmission_Control_Protocol)
* [User datagram protocol](https://en.wikipedia.org/wiki/User_Datagram_Protocol)
* [Facebookのメムキャッシュスケーリング](http://www.cs.bu.edu/~jappavoo/jappavoo.github.com/451/papers/memcache-fb.pdf)
### 遠隔手続呼出 (RPC)
<p align="center">
<img src="images/iF4Mkb5.png">
@ -1411,7 +1411,7 @@ TCPよりもUDPを使うのは:
<i><a href=http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview>Source: Crack the system design interview</a></i>
</p>
RPCではクライアントがリモートサーバーなどの異なるアドレス空間でプロシージャーが処理されるようにします。プロシージャーはローカルでのコールのように、クライアントからサーバーにどのように通信するかという詳細を省いた状態でコードが書かれます。リモートのコールは普通、ローカルのコールよりも遅く、信頼性に欠けるため、RPCコールをローカルコールと区別させておくことが好ましいでしょう。人気のRPCフレームワークは以下です。[Protobuf](https://developers.google.com/protocol-buffers/) 、 [Thrift](https://thrift.apache.org/) 、[Avro](https://avro.apache.org/docs/current/)
RPCではクライアントがリモートサーバーなどの異なるアドレス空間でプロシージャーが処理されるようにします。プロシージャーはローカルでのコールのように、クライアントからサーバーにどのように通信するかという詳細を省いた状態でコードが書かれます。リモートのコールは普通、ローカルのコールよりも遅く、信頼性に欠けるため、RPCコールをローカルコールと区別させておくことが好ましいでしょう。人気のRPCフレームワークは以下です。[Protobuf](https://developers.google.com/protocol-buffers/)、 [Thrift](https://thrift.apache.org/)、[Avro](https://avro.apache.org/docs/current/)
RPC は リクエストレスポンスプロトコル:
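One concrete way to see that request-response shape is Python's built-in XML-RPC modules, used here purely as an illustration; real systems would more often use the frameworks listed above.

```python
# --- server side ---
from xmlrpc.server import SimpleXMLRPCServer

def hire(name):                        # the procedure exposed to remote callers
    return f"hired {name}"

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(hire)
# server.serve_forever()               # uncomment to actually serve requests

# --- client side ---
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
# Reads like a local call, but marshals the arguments, sends the request
# over the network, and unmarshals the server's response:
# result = proxy.hire("alice")
```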
@ -1450,18 +1450,18 @@ RPCは振る舞いを公開することに焦点を当てています。RPCは
* RPCクライアントとはサービス実装により厳密に左右されることになります。
* 新しいオペレーション、使用例があるたびに新しくAPIが定義されなければなりません。
* RPCをデバッグするのは難しい可能性があります。
* 既存のテクノロジーをそのまま使ってサービスを構築することはできないかもしれません。例えば、[Squid](http://www.squid-cache.org/) などのサーバーに[RPCコールが正しくキャッシュ](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/) されるように追加で骨を折る必要があるかもしれません。
* 既存のテクノロジーをそのまま使ってサービスを構築することはできないかもしれません。例えば、[Squid](http://www.squid-cache.org/)などのサーバーに[RPCコールが正しくキャッシュ](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/) されるように追加で骨を折る必要があるかもしれません。
### Representational state transfer (REST)
RESTは、クライアントがサーバーによってマネージされるリソースに対して処理を行うクライアント・サーバーモデルを支持するアーキテクチャスタイルです。サーバーは操作できるもしくは新しいリソースレプレゼンテーションを受け取ることができるようなリソースやアクションのレプレゼンテーションを提供します。すべての通信はステートレスでキャッシュ可能でなければなりません。
RESTful なインターフェースには次の四つの特徴があります:
* **特徴的なリソース (URI in HTTP) ** - どのオペレーションであっても同じURIを使う。
* **HTTP動詞によって変わる (Verbs in HTTP) ** - 動詞、ヘッダー、ボディを使う
* **自己説明的なエラーメッセージ (status response in HTTP) ** - ステータスコードを使い、新しく作ったりしないこと。
* **[HATEOAS](http://restcookbook.com/Basics/hateoas/) (HTML interface for HTTP) ** - 自分のwebサービスがブラウザで完全にアクセスできること。
* **特徴的なリソース (URI in HTTP)** - どのオペレーションであっても同じURIを使う。
* **HTTP動詞によって変わる (Verbs in HTTP)** - 動詞、ヘッダー、ボディを使う
* **自己説明的なエラーメッセージ (status response in HTTP)** - ステータスコードを使い、新しく作ったりしないこと。
* **[HATEOAS](http://restcookbook.com/Basics/hateoas/) (HTML interface for HTTP)** - 自分のwebサービスがブラウザで完全にアクセスできること。
サンプル REST コール:
@ -1472,7 +1472,7 @@ PUT /someresources/anId
{"anotherdata": "another value"}
```
RESTはデータを公開することに焦点を当てています。クライアントとサーバーのカップリングを最小限にするもので、パブリックAPIなどによく用いられます。RESTはURI、 [representation through headers](https://github.com/for-GET/know-your-http-well/blob/master/headers.md) 、そして、GET、POST、PUT、 DELETE、PATCHなどのHTTP動詞等のよりジェネリックで統一されたメソッドを用います。ステートレスであるのでRESTは水平スケーリングやパーティショニングに最適です。
RESTはデータを公開することに焦点を当てています。クライアントとサーバーのカップリングを最小限にするもので、パブリックAPIなどによく用いられます。RESTはURI、 [representation through headers](https://github.com/for-GET/know-your-http-well/blob/master/headers.md)、そして、GET、POST、PUT、 DELETE、PATCHなどのHTTP動詞等のよりジェネリックで統一されたメソッドを用います。ステートレスであるのでRESTは水平スケーリングやパーティショニングに最適です。
#### 欠点: REST
@ -1499,30 +1499,30 @@ RESTはデータを公開することに焦点を当てています。クライ
#### その他の参考資料、ページ: REST と RPC
* [Do you really know why you prefer REST over RPC](https://apihandyman.io/do-you-really-know-why-you-prefer-rest-over-rpc/)
* [When are RPC-ish approaches more appropriate than REST?](http://programmers.stackexchange.com/a/181186)
* [REST vs JSON-RPC](http://stackoverflow.com/questions/15056878/rest-vs-json-rpc)
* [Debunking the myths of RPC and REST](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/)
* [What are the drawbacks of using REST](https://www.quora.com/What-are-the-drawbacks-of-using-RESTful-APIs)
* [Crack the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
* [Thrift](https://code.facebook.com/posts/1468950976659943/)
* [Why REST for internal use and not RPC](http://arstechnica.com/civis/viewtopic.php?t=1190508)
## セキュリティ
このセクションは更新が必要です。[contributing](#contributing) してください!
このセクションは更新が必要です。[contributing](#contributing)してください!
セキュリティは幅広いトピックです。十分な経験、セキュリティ分野のバックグラウンドがなくても、セキュリティの知識を要する職に応募するのでない限り、基本以上のことを知る必要はないでしょう。
* 情報伝達、保存における暗号化
* [XSS](https://en.wikipedia.org/wiki/Cross-site_scripting) や [SQL injection](https://en.wikipedia.org/wiki/SQL_injection) を防ぐために、全てのユーザー入力もしくはユーザーに露出される入力パラメーターをサニタイズする
* [XSS](https://en.wikipedia.org/wiki/Cross-site_scripting) や [SQL injection](https://en.wikipedia.org/wiki/SQL_injection)を防ぐために、全てのユーザー入力もしくはユーザーに露出される入力パラメーターをサニタイズする
* SQL injectionを防ぐためにパラメータ化されたクエリを用いる。
* [least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege) の原理を用いる
* [least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege)の原理を用いる
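The parameterized-query bullet above can be demonstrated with the standard library's sqlite3 module; the table and the hostile input string are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

user_input = "alice' OR '1'='1"   # hostile input attempting SQL injection

# Unsafe: string formatting lets the input rewrite the query itself
# conn.execute("SELECT * FROM users WHERE name = '%s'" % user_input)

# Safe: the value is bound as a parameter and can never become SQL
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)   # [] -- the hostile string matches no row
```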
### その他の参考資料、ページ:
* [開発者のためのセキュリティガイド](https://github.com/FallibleInc/security-guide-for-developers)
* [OWASP top ten](https://www.owasp.org/index.php/OWASP_Top_Ten_Cheat_Sheet)
## 補遺
@ -1545,7 +1545,7 @@ RESTはデータを公開することに焦点を当てています。クライ
#### その他の参考資料、ページ:
* [2の乗数表](https://en.wikipedia.org/wiki/Power_of_two)
### 全てのプログラマーが知るべきレイテンシー値
@ -1586,14 +1586,14 @@ Notes
#### レイテンシーの視覚的表
![](https://camo.githubusercontent.com/77f72259e1eb58596b564d1ad823af1853bc60a3/687474703a2f2f692e696d6775722e636f6d2f6b307431652e706e67)
#### その他の参考資料、ページ:
* [全てのプログラマーが知るべきレイテンシー値 - 1](https://gist.github.com/jboner/2841832)
* [全てのプログラマーが知るべきレイテンシー値 - 2](https://gist.github.com/hellerbarde/2843375)
* [Designs, lessons, and advice from building large distributed systems](http://www.cs.cornell.edu/projects/ladis2009/talks/dean-keynote-ladis2009.pdf)
* [Software Engineering Advice from Building Large-Scale Distributed Systems](https://static.googleusercontent.com/media/research.google.com/en//people/jeff/stanford-295-talk.pdf)
### 他のシステム設計面接例題
@ -1602,26 +1602,26 @@ Notes
| 質問 | 解答 |
|---|---|
| Dropboxのようなファイル同期サービスを設計する | [youtube.com](https://www.youtube.com/watch?v=PE4gwstWhmc) |
| Googleのような検索エンジンの設計 | [queue.acm.org](http://queue.acm.org/detail.cfm?id=988407) <br/>[stackexchange.com](http://programmers.stackexchange.com/questions/38324/interview-question-how-would-you-implement-google-search) <br/>[ardendertat.com](http://www.ardendertat.com/2012/01/11/implementing-search-engines/) <br/>[stanford.edu](http://infolab.stanford.edu/~backrub/google.html) |
| Googleのような検索エンジンの設計 | [queue.acm.org](http://queue.acm.org/detail.cfm?id=988407)<br/>[stackexchange.com](http://programmers.stackexchange.com/questions/38324/interview-question-how-would-you-implement-google-search)<br/>[ardendertat.com](http://www.ardendertat.com/2012/01/11/implementing-search-engines/)<br/>[stanford.edu](http://infolab.stanford.edu/~backrub/google.html) |
| Googleのようなスケーラブルなwebクローラーの設計 | [quora.com](https://www.quora.com/How-can-I-build-a-web-crawler-from-scratch) |
| Google docsの設計 | [code.google.com](https://code.google.com/p/google-mobwrite/) <br/>[neil.fraser.name](https://neil.fraser.name/writing/sync/) |
| Google docsの設計 | [code.google.com](https://code.google.com/p/google-mobwrite/)<br/>[neil.fraser.name](https://neil.fraser.name/writing/sync/) |
| Redisのようなキーバリューストアの設計 | [slideshare.net](http://www.slideshare.net/dvirsky/introduction-to-redis) |
| Memcachedのようなキャッシュシステムの設計 | [slideshare.net](http://www.slideshare.net/oemebamo/introduction-to-memcached) |
| Amazonのようなレコメンデーションシステムの設計 | [hulu.com](http://tech.hulu.com/blog/2011/09/19/recommendation-system.html) <br/>[ijcai13.org](http://ijcai13.org/files/tutorial_slides/td3.pdf) |
| Amazonのようなレコメンデーションシステムの設計 | [hulu.com](http://tech.hulu.com/blog/2011/09/19/recommendation-system.html)<br/>[ijcai13.org](http://ijcai13.org/files/tutorial_slides/td3.pdf) |
| BitlyのようなURL短縮サービスの設計 | [n00tc0d3r.blogspot.com](http://n00tc0d3r.blogspot.com/) |
| WhatsAppのようなチャットアプリの設計 | [highscalability.com](http://highscalability.com/blog/2014/2/26/the-whatsapp-architecture-facebook-bought-for-19-billion.html)
| Instagramのような写真共有サービスの設計 | [highscalability.com](http://highscalability.com/flickr-architecture) <br/>[highscalability.com](http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html) |
| Facebookニュースフィードの設計 | [quora.com](http://www.quora.com/What-are-best-practices-for-building-something-like-a-News-Feed) <br/>[quora.com](http://www.quora.com/Activity-Streams/What-are-the-scaling-issues-to-keep-in-mind-while-developing-a-social-network-feed) <br/>[slideshare.net](http://www.slideshare.net/danmckinley/etsy-activity-feeds-architecture) |
| Facebookタイムラインの設計 | [facebook.com](https://www.facebook.com/note.php?note_id=10150468255628920) <br/>[highscalability.com](http://highscalability.com/blog/2012/1/23/facebook-timeline-brought-to-you-by-the-power-of-denormaliza.html) |
| Facebookチャットの設計 | [erlang-factory.com](http://www.erlang-factory.com/upload/presentations/31/EugeneLetuchy-ErlangatFacebook.pdf) <br/>[facebook.com](https://www.facebook.com/note.php?note_id=14218138919&id=9445547199&index=0) |
| Facebookのようなgraph検索の設計 | [facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-building-out-the-infrastructure-for-graph-search/10151347573598920) <br/>[facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-indexing-and-ranking-in-graph-search/10151361720763920) <br/>[facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-the-natural-language-interface-of-graph-search/10151432733048920) |
| WhatsAppのようなチャットアプリの設計 | [highscalability.com](http://highscalability.com/blog/2014/2/26/the-whatsapp-architecture-facebook-bought-for-19-billion.html)
| Instagramのような写真共有サービスの設計 | [highscalability.com](http://highscalability.com/flickr-architecture)<br/>[highscalability.com](http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html) |
| Facebookニュースフィードの設計 | [quora.com](http://www.quora.com/What-are-best-practices-for-building-something-like-a-News-Feed)<br/>[quora.com](http://www.quora.com/Activity-Streams/What-are-the-scaling-issues-to-keep-in-mind-while-developing-a-social-network-feed)<br/>[slideshare.net](http://www.slideshare.net/danmckinley/etsy-activity-feeds-architecture) |
| Facebookタイムラインの設計 | [facebook.com](https://www.facebook.com/note.php?note_id=10150468255628920)<br/>[highscalability.com](http://highscalability.com/blog/2012/1/23/facebook-timeline-brought-to-you-by-the-power-of-denormaliza.html) |
| Facebookチャットの設計 | [erlang-factory.com](http://www.erlang-factory.com/upload/presentations/31/EugeneLetuchy-ErlangatFacebook.pdf)<br/>[facebook.com](https://www.facebook.com/note.php?note_id=14218138919&id=9445547199&index=0) |
| Facebookのようなgraph検索の設計 | [facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-building-out-the-infrastructure-for-graph-search/10151347573598920)<br/>[facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-indexing-and-ranking-in-graph-search/10151361720763920)<br/>[facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-the-natural-language-interface-of-graph-search/10151432733048920) |
| CloudFlareのようなCDNの設計 | [cmu.edu](http://repository.cmu.edu/cgi/viewcontent.cgi?article=2112&context=compsci) |
| Twitterのトレンド機能の設計 | [michael-noll.com](http://www.michael-noll.com/blog/2013/01/18/implementing-real-time-trending-topics-in-storm/) <br/>[snikolov .wordpress.com](http://snikolov.wordpress.com/2012/11/14/early-detection-of-twitter-trends/) |
| ランダムID発行システムの設計 | [blog.twitter.com](https://blog.twitter.com/2010/announcing-snowflake) <br/>[github.com](https://github.com/twitter/snowflake/) |
| 一定のインターバル時間での上位k件を返す | [ucsb.edu](https://icmi.cs.ucsb.edu/research/tech_reports/reports/2005-23.pdf) <br/>[wpi.edu](http://davis.wpi.edu/xmdv/docs/EDBT11-diyang.pdf) |
| Twitterのトレンド機能の設計 | [michael-noll.com](http://www.michael-noll.com/blog/2013/01/18/implementing-real-time-trending-topics-in-storm/)<br/>[snikolov .wordpress.com](http://snikolov.wordpress.com/2012/11/14/early-detection-of-twitter-trends/) |
| ランダムID発行システムの設計 | [blog.twitter.com](https://blog.twitter.com/2010/announcing-snowflake)<br/>[github.com](https://github.com/twitter/snowflake/) |
| 一定のインターバル時間での上位k件を返す | [ucsb.edu](https://icmi.cs.ucsb.edu/research/tech_reports/reports/2005-23.pdf)<br/>[wpi.edu](http://davis.wpi.edu/xmdv/docs/EDBT11-diyang.pdf) |
| 複数のデータセンターからデータを配信するサービスの設計 | [highscalability.com](http://highscalability.com/blog/2009/8/24/how-google-serves-data-from-multiple-datacenters.html) |
| オンラインの複数プレイヤーカードゲームの設計 | [indieflashblog.com](https://web.archive.org/web/20180929181117/http://www.indieflashblog.com/how-to-create-an-asynchronous-multiplayer-game.html) <br/>[buildnewgames.com](http://buildnewgames.com/real-time-multiplayer/) |
| ガーベッジコレクションシステムの設計 | [stuffwithstuff.com](http://journal.stuffwithstuff.com/2013/12/08/babys-first-garbage-collector/) <br/>[washington.edu](http://courses.cs.washington.edu/courses/csep521/07wi/prj/rick.pdf) |
| オンラインの複数プレイヤーカードゲームの設計 | [indieflashblog.com](https://web.archive.org/web/20180929181117/http://www.indieflashblog.com/how-to-create-an-asynchronous-multiplayer-game.html)<br/>[buildnewgames.com](http://buildnewgames.com/real-time-multiplayer/) |
| ガーベッジコレクションシステムの設計 | [stuffwithstuff.com](http://journal.stuffwithstuff.com/2013/12/08/babys-first-garbage-collector/)<br/>[washington.edu](http://courses.cs.washington.edu/courses/csep521/07wi/prj/rick.pdf) |
| システム設計例題を追加する | [Contribute](#contributing) |
### 実世界のアーキテクチャ
@ -1648,18 +1648,18 @@ Notes
| | | |
| データストア | **Bigtable** - Googleのカラム指向分散データベース | [harvard.edu](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf) |
| データストア | **HBase** - Bigtableのオープンソース実装 | [slideshare.net](http://www.slideshare.net/alexbaranau/intro-to-hbase) |
| データストア | **Cassandra** - Facebookのカラム指向分散データベース | [slideshare.net](http://www.slideshare.net/planetcassandra/cassandra-introduction-features-30103666)
| データストア | **Cassandra** - Facebookのカラム指向分散データベース | [slideshare.net](http://www.slideshare.net/planetcassandra/cassandra-introduction-features-30103666)
| データストア | **DynamoDB** - Amazonのドキュメント指向分散データベース | [harvard.edu](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/decandia07dynamo.pdf) |
| データストア | **MongoDB** - ドキュメント指向分散データベース | [slideshare.net](http://www.slideshare.net/mdirolf/introduction-to-mongodb) |
| データストア | **Spanner** - Googleのグローバル分散データベース | [research.google.com](http://research.google.com/archive/spanner-osdi2012.pdf) |
| データストア | **Memcached** - 分散メモリーキャッシングシステム | [slideshare.net](http://www.slideshare.net/oemebamo/introduction-to-memcached) |
| データストア | **Redis** - 永続性とバリュータイプを兼ね備えた分散メモリーキャッシングシステム | [slideshare.net](http://www.slideshare.net/dvirsky/introduction-to-redis) |
| | | |
| ファイルシステム | **Google File System (GFS) ** - 分散ファイルシステム | [research.google.com](http://static.googleusercontent.com/media/research.google.com/zh-CN/us/archive/gfs-sosp2003.pdf) |
| ファイルシステム | **Hadoop File System (HDFS) ** - GFSのオープンソース実装 | [apache.org](https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html) |
| ファイルシステム | **Google File System (GFS)** - 分散ファイルシステム | [research.google.com](http://static.googleusercontent.com/media/research.google.com/zh-CN/us/archive/gfs-sosp2003.pdf) |
| ファイルシステム | **Hadoop File System (HDFS)** - GFSのオープンソース実装 | [apache.org](https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html) |
| | | |
| Misc | **Chubby** - 疎結合の分散システムをロックするGoogleのサービス | [research.google.com](http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/chubby-osdi06.pdf) |
| Misc | **Dapper** - 分散システムを追跡するインフラ | [research.google.com](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36356.pdf)
| Misc | **Dapper** - 分散システムを追跡するインフラ | [research.google.com](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36356.pdf)
| Misc | **Kafka** - LinkedInによるPub/subメッセージキュー | [slideshare.net](http://www.slideshare.net/mumrah/kafka-talk-tri-hug) |
| Misc | **Zookeeper** - 同期を可能にする中央集権インフラとサービス | [slideshare.net](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) |
| | アーキテクチャを追加する | [Contribute](#contributing) |
@ -1674,22 +1674,22 @@ Notes
| DropBox | [How we've scaled Dropbox](https://www.youtube.com/watch?v=PE4gwstWhmc) |
| ESPN | [Operating At 100,000 duh nuh nuhs per second](http://highscalability.com/blog/2013/11/4/espns-architecture-at-scale-operating-at-100000-duh-nuh-nuhs.html) |
| Google | [Google architecture](http://highscalability.com/google-architecture) |
| Instagram | [14 million users, terabytes of photos](http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html) <br/>[What powers Instagram](http://instagram-engineering.tumblr.com/post/13649370142/what-powers-instagram-hundreds-of-instances) |
| Instagram | [14 million users, terabytes of photos](http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html)<br/>[What powers Instagram](http://instagram-engineering.tumblr.com/post/13649370142/what-powers-instagram-hundreds-of-instances) |
| Justin.tv | [Justin.Tv's live video broadcasting architecture](http://highscalability.com/blog/2010/3/16/justintvs-live-video-broadcasting-architecture.html) |
| Facebook | [Scaling memcached at Facebook](https://cs.uwaterloo.ca/~brecht/courses/854-Emerging-2014/readings/key-value/fb-memcached-nsdi-2013.pdf) <br/>[TAO: Facebook's distributed data store for the social graph](https://cs.uwaterloo.ca/~brecht/courses/854-Emerging-2014/readings/data-store/tao-facebook-distributed-datastore-atc-2013.pdf) <br/>[Facebook's photo storage](https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Beaver.pdf) |
| Facebook | [Scaling memcached at Facebook](https://cs.uwaterloo.ca/~brecht/courses/854-Emerging-2014/readings/key-value/fb-memcached-nsdi-2013.pdf)<br/>[TAO: Facebook's distributed data store for the social graph](https://cs.uwaterloo.ca/~brecht/courses/854-Emerging-2014/readings/data-store/tao-facebook-distributed-datastore-atc-2013.pdf)<br/>[Facebook's photo storage](https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Beaver.pdf) |
| Flickr | [Flickr architecture](http://highscalability.com/flickr-architecture) |
| Mailbox | [From 0 to one million users in 6 weeks](http://highscalability.com/blog/2013/6/18/scaling-mailbox-from-0-to-one-million-users-in-6-weeks-and-1.html) |
| Pinterest | [From 0 To 10s of billions of page views a month](http://highscalability.com/blog/2013/4/15/scaling-pinterest-from-0-to-10s-of-billions-of-page-views-a.html) <br/>[18 million visitors, 10x growth, 12 employees](http://highscalability.com/blog/2012/5/21/pinterest-architecture-update-18-million-visitors-10x-growth.html) |
| Pinterest | [From 0 To 10s of billions of page views a month](http://highscalability.com/blog/2013/4/15/scaling-pinterest-from-0-to-10s-of-billions-of-page-views-a.html)<br/>[18 million visitors, 10x growth, 12 employees](http://highscalability.com/blog/2012/5/21/pinterest-architecture-update-18-million-visitors-10x-growth.html) |
| Playfish | [50 million monthly users and growing](http://highscalability.com/blog/2010/9/21/playfishs-social-gaming-architecture-50-million-monthly-user.html) |
| PlentyOfFish | [PlentyOfFish architecture](http://highscalability.com/plentyoffish-architecture) |
| Salesforce | [How they handle 1.3 billion transactions a day](http://highscalability.com/blog/2013/9/23/salesforce-architecture-how-they-handle-13-billion-transacti.html) |
| Stack Overflow | [Stack Overflow architecture](http://highscalability.com/blog/2009/8/5/stack-overflow-architecture.html) |
| TripAdvisor | [40M visitors, 200M dynamic page views, 30TB data](http://highscalability.com/blog/2011/6/27/tripadvisor-architecture-40m-visitors-200m-dynamic-page-view.html) |
| Tumblr | [15 billion page views a month](http://highscalability.com/blog/2012/2/13/tumblr-architecture-15-billion-page-views-a-month-and-harder.html) |
| Twitter | [Making Twitter 10000 percent faster](http://highscalability.com/scaling-twitter-making-twitter-10000-percent-faster) <br/>[Storing 250 million tweets a day using MySQL](http://highscalability.com/blog/2011/12/19/how-twitter-stores-250-million-tweets-a-day-using-mysql.html) <br/>[150M active users, 300K QPS, a 22 MB/S firehose](http://highscalability.com/blog/2013/7/8/the-architecture-twitter-uses-to-deal-with-150m-active-users.html) <br/>[Timelines at scale](https://www.infoq.com/presentations/Twitter-Timeline-Scalability) <br/>[Big and small data at Twitter](https://www.youtube.com/watch?v=5cKTP36HVgI) <br/>[Operations at Twitter: scaling beyond 100 million users](https://www.youtube.com/watch?v=z8LU0Cj6BOU) |
| Twitter | [Making Twitter 10000 percent faster](http://highscalability.com/scaling-twitter-making-twitter-10000-percent-faster)<br/>[Storing 250 million tweets a day using MySQL](http://highscalability.com/blog/2011/12/19/how-twitter-stores-250-million-tweets-a-day-using-mysql.html)<br/>[150M active users, 300K QPS, a 22 MB/S firehose](http://highscalability.com/blog/2013/7/8/the-architecture-twitter-uses-to-deal-with-150m-active-users.html)<br/>[Timelines at scale](https://www.infoq.com/presentations/Twitter-Timeline-Scalability)<br/>[Big and small data at Twitter](https://www.youtube.com/watch?v=5cKTP36HVgI)<br/>[Operations at Twitter: scaling beyond 100 million users](https://www.youtube.com/watch?v=z8LU0Cj6BOU) |
| Uber | [How Uber scales their real-time market platform](http://highscalability.com/blog/2015/9/14/how-uber-scales-their-real-time-market-platform.html) |
| WhatsApp | [The WhatsApp architecture Facebook bought for $19 billion](http://highscalability.com/blog/2014/2/26/the-whatsapp-architecture-facebook-bought-for-19-billion.html) |
| YouTube | [YouTube scalability](https://www.youtube.com/watch?v=w5WVu624fY8) <br/>[YouTube architecture](http://highscalability.com/youtube-architecture) |
| YouTube | [YouTube scalability](https://www.youtube.com/watch?v=w5WVu624fY8)<br/>[YouTube architecture](http://highscalability.com/youtube-architecture) |
### 企業のエンジニアブログ
@ -1697,62 +1697,62 @@ Notes
>
> 投げられる質問は同じ分野から来ることもあるでしょう
* [Airbnb Engineering](http://nerds.airbnb.com/)
* [Atlassian Developers](https://developer.atlassian.com/blog/)
* [Autodesk Engineering](http://cloudengineering.autodesk.com/blog/)
* [AWS Blog](https://aws.amazon.com/blogs/aws/)
* [Bitly Engineering Blog](http://word.bitly.com/)
* [Box Blogs](https://www.box.com/blog/engineering/)
* [Cloudera Developer Blog](http://blog.cloudera.com/blog/)
* [Dropbox Tech Blog](https://tech.dropbox.com/)
* [Engineering at Quora](http://engineering.quora.com/)
* [Ebay Tech Blog](http://www.ebaytechblog.com/)
* [Evernote Tech Blog](https://blog.evernote.com/tech/)
* [Etsy Code as Craft](http://codeascraft.com/)
* [Facebook Engineering](https://www.facebook.com/Engineering)
* [Flickr Code](http://code.flickr.net/)
* [Foursquare Engineering Blog](http://engineering.foursquare.com/)
* [GitHub Engineering Blog](http://githubengineering.com/)
* [Google Research Blog](http://googleresearch.blogspot.com/)
* [Groupon Engineering Blog](https://engineering.groupon.com/)
* [Heroku Engineering Blog](https://engineering.heroku.com/)
* [Hubspot Engineering Blog](http://product.hubspot.com/blog/topic/engineering)
* [High Scalability](http://highscalability.com/)
* [Instagram Engineering](http://instagram-engineering.tumblr.com/)
* [Intel Software Blog](https://software.intel.com/en-us/blogs/)
* [Jane Street Tech Blog](https://blogs.janestreet.com/category/ocaml/)
* [LinkedIn Engineering](http://engineering.linkedin.com/blog)
* [Microsoft Engineering](https://engineering.microsoft.com/)
* [Microsoft Python Engineering](https://blogs.msdn.microsoft.com/pythonengineering/)
* [Netflix Tech Blog](http://techblog.netflix.com/)
* [Paypal Developer Blog](https://devblog.paypal.com/category/engineering/)
* [Pinterest Engineering Blog](http://engineering.pinterest.com/)
* [Quora Engineering](https://engineering.quora.com/)
* [Reddit Blog](http://www.redditblog.com/)
* [Salesforce Engineering Blog](https://developer.salesforce.com/blogs/engineering/)
* [Slack Engineering Blog](https://slack.engineering/)
* [Spotify Labs](https://labs.spotify.com/)
* [Twilio Engineering Blog](http://www.twilio.com/engineering)
* [Twitter Engineering](https://engineering.twitter.com/)
* [Uber Engineering Blog](http://eng.uber.com/)
* [Yahoo Engineering Blog](http://yahooeng.tumblr.com/)
* [Yelp Engineering Blog](http://engineeringblog.yelp.com/)
* [Zynga Engineering Blog](https://www.zynga.com/blogs/engineering)
#### その他の参考資料、ページ:
* [kilimchoi/engineering-blogs](https://github.com/kilimchoi/engineering-blogs)
ここにあるリストは比較的小規模なものにとどめ、[kilimchoi/engineering-blogs](https://github.com/kilimchoi/engineering-blogs) により詳細に記すことで重複しないようにしておくことにする。エンジニアブログへのリンクを追加する場合はここではなく、engineering-blogsレボジトリに追加することを検討してください。
ここにあるリストは比較的小規模なものにとどめ、[kilimchoi/engineering-blogs](https://github.com/kilimchoi/engineering-blogs)により詳細に記すことで重複しないようにしておくことにする。エンジニアブログへのリンクを追加する場合はここではなく、engineering-blogsレボジトリに追加することを検討してください。
## 進行中の作業
セクションの追加や、進行中の作業を手伝っていただける場合は[こちら](#contributing) !
セクションの追加や、進行中の作業を手伝っていただける場合は[こちら](#contributing)!
* MapReduceによる分散コンピューティング
* Consistent hashing
* Scatter gather
* [Contribute](#contributing)
## クレジット
@ -1760,28 +1760,28 @@ Notes
Special thanks to:
* [Hired in tech](http://www.hiredintech.com/system-design/the-system-design-process/)
* [Cracking the coding interview](https://www.amazon.com/dp/0984782850/)
* [High scalability](http://highscalability.com/)
* [checkcheckzz/system-design-interview](https://github.com/checkcheckzz/system-design-interview)
* [shashank88/system_design](https://github.com/shashank88/system_design)
* [mmcgrana/services-engineering](https://github.com/mmcgrana/services-engineering)
* [System design cheat sheet](https://gist.github.com/vasanthk/485d1c25737e8e72759f)
* [A distributed systems reading list](http://dancres.github.io/Pages/)
* [Cracking the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
## Contact info
Feel free to contact me to discuss any issues, questions, or comments.
My contact info can be found on my [GitHub page](https://github.com/donnemartin) .
My contact info can be found on my [GitHub page](https://github.com/donnemartin).
## License
*I am providing code and resources in this repository to you under an open source license. Because this is my personal repository, the license you receive to my code and resources is from me and not my employer (Facebook) .*
*I am providing code and resources in this repository to you under an open source license. Because this is my personal repository, the license you receive to my code and resources is from me and not my employer (Facebook).*
Copyright 2017 Donne Martin
Creative Commons Attribution 4.0 International License (CC BY 4.0)
Creative Commons Attribution 4.0 International License (CC BY 4.0)
http://creativecommons.org/licenses/by/4.0/
View File
@ -1,9 +1,9 @@
> * 原文地址:[github.com/donnemartin/system-design-primer](https://github.com/donnemartin/system-design-primer)
> * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner)
> * 译者:[XatMassacrE](https://github.com/XatMassacrE) 、[L9m](https://github.com/L9m) 、[Airmacho](https://github.com/Airmacho) 、[xiaoyusilen](https://github.com/xiaoyusilen) 、[jifaxu](https://github.com/jifaxu) 、[根号三](https://github.com/sqrthree)
> * 原文地址:[github.com/donnemartin/system-design-primer](https://github.com/donnemartin/system-design-primer)
> * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner)
> * 译者:[XatMassacrE](https://github.com/XatMassacrE)、[L9m](https://github.com/L9m)、[Airmacho](https://github.com/Airmacho)、[xiaoyusilen](https://github.com/xiaoyusilen)、[jifaxu](https://github.com/jifaxu)、[根号三](https://github.com/sqrthree)
> * 这个 [链接](https://github.com/xitu/system-design-primer/compare/master...donnemartin:master) 用来查看本翻译与英文版是否有差别(如果你没有看到 README.md 发生变化,那就意味着这份翻译文档是最新的)。
*[English](README.md) ∙ [日本語](README-ja.md) ∙ [简体中文](README-zh-Hans.md) ∙ [繁體中文](README-zh-TW.md) | [العَرَبِيَّة‎](https://github.com/donnemartin/system-design-primer/issues/170) ∙ [বাংলা](https://github.com/donnemartin/system-design-primer/issues/220) ∙ [Português do Brasil](https://github.com/donnemartin/system-design-primer/issues/40) ∙ [Deutsch](https://github.com/donnemartin/system-design-primer/issues/186) ∙ [ελληνικά](https://github.com/donnemartin/system-design-primer/issues/130) ∙ [עברית](https://github.com/donnemartin/system-design-primer/issues/272) ∙ [Italiano](https://github.com/donnemartin/system-design-primer/issues/104) ∙ [한국어](https://github.com/donnemartin/system-design-primer/issues/102) ∙ [فارسی](https://github.com/donnemartin/system-design-primer/issues/110) ∙ [Polski](https://github.com/donnemartin/system-design-primer/issues/68) ∙ [русский язык](https://github.com/donnemartin/system-design-primer/issues/87) ∙ [Español](https://github.com/donnemartin/system-design-primer/issues/136) ∙ [ภาษาไทย](https://github.com/donnemartin/system-design-primer/issues/187) ∙ [Türkçe](https://github.com/donnemartin/system-design-primer/issues/39) ∙ [tiếng Việt](https://github.com/donnemartin/system-design-primer/issues/127) ∙ [Français](https://github.com/donnemartin/system-design-primer/issues/250) | [Add Translation](https://github.com/donnemartin/system-design-primer/issues/28) *
*[English](README.md) ∙ [日本語](README-ja.md) ∙ [简体中文](README-zh-Hans.md) ∙ [繁體中文](README-zh-TW.md) | [العَرَبِيَّة‎](https://github.com/donnemartin/system-design-primer/issues/170) ∙ [বাংলা](https://github.com/donnemartin/system-design-primer/issues/220) ∙ [Português do Brasil](https://github.com/donnemartin/system-design-primer/issues/40) ∙ [Deutsch](https://github.com/donnemartin/system-design-primer/issues/186) ∙ [ελληνικά](https://github.com/donnemartin/system-design-primer/issues/130) ∙ [עברית](https://github.com/donnemartin/system-design-primer/issues/272) ∙ [Italiano](https://github.com/donnemartin/system-design-primer/issues/104) ∙ [한국어](https://github.com/donnemartin/system-design-primer/issues/102) ∙ [فارسی](https://github.com/donnemartin/system-design-primer/issues/110) ∙ [Polski](https://github.com/donnemartin/system-design-primer/issues/68) ∙ [русский язык](https://github.com/donnemartin/system-design-primer/issues/87) ∙ [Español](https://github.com/donnemartin/system-design-primer/issues/136) ∙ [ภาษาไทย](https://github.com/donnemartin/system-design-primer/issues/187) ∙ [Türkçe](https://github.com/donnemartin/system-design-primer/issues/39) ∙ [tiếng Việt](https://github.com/donnemartin/system-design-primer/issues/127) ∙ [Français](https://github.com/donnemartin/system-design-primer/issues/250) | [Add Translation](https://github.com/donnemartin/system-design-primer/issues/28)*
# 系统设计入门
@ -30,7 +30,7 @@
这是一个不断更新的开源项目的初期的版本。
欢迎[贡献](#贡献)
欢迎[贡献](#贡献)
### 为系统设计的面试做准备
@ -40,11 +40,11 @@
面试准备的其他主题:
* [学习指引](#学习指引)
* [如何处理一个系统设计的面试题](#如何处理一个系统设计的面试题)
* [系统设计的面试题,**含解答**](#系统设计的面试题和解答)
* [面向对象设计的面试题,**含解答**](#面向对象设计的面试问题及解答)
* [其它的系统设计面试题](#其它的系统设计面试题)
## 抽认卡
@ -53,26 +53,26 @@
<br/>
</p>
这里提供的[抽认卡堆](https://apps.ankiweb.net/) 使用间隔重复的方法,帮助你记忆关键的系统设计概念。
这里提供的[抽认卡堆](https://apps.ankiweb.net/)使用间隔重复的方法,帮助你记忆关键的系统设计概念。
* [系统设计的卡堆](resources/flash_cards/System%20Design.apkg)
* [系统设计的练习卡堆](resources/flash_cards/System%20Design%20Exercises.apkg)
* [面向对象设计的练习卡堆](resources/flash_cards/OO%20Design.apkg)
随时随地都可使用。
### 代码资源:互动式编程挑战
你正在寻找资源以准备[**编程面试**](https://github.com/donnemartin/interactive-coding-challenges) 吗?
你正在寻找资源以准备[**编程面试**](https://github.com/donnemartin/interactive-coding-challenges)吗?
<p align="center">
<img src="images/b4YtAEN.png">
<br/>
</p>
请查看我们的姐妹仓库[**互动式编程挑战**](https://github.com/donnemartin/interactive-coding-challenges) ,其中包含了一个额外的抽认卡堆:
请查看我们的姐妹仓库[**互动式编程挑战**](https://github.com/donnemartin/interactive-coding-challenges),其中包含了一个额外的抽认卡堆:
* [代码卡堆](https://github.com/donnemartin/interactive-coding-challenges/tree/master/anki_cards/Coding.apkg)
* [代码卡堆](https://github.com/donnemartin/interactive-coding-challenges/tree/master/anki_cards/Coding.apkg)
## 贡献
@ -83,11 +83,11 @@
* 修复错误
* 完善章节
* 添加章节
* [帮助翻译](https://github.com/donnemartin/system-design-primer/issues/28)
* [帮助翻译](https://github.com/donnemartin/system-design-primer/issues/28)
一些还需要完善的内容放在了[正在完善中](#正在完善中)
一些还需要完善的内容放在了[正在完善中](#正在完善中)。
请查看[贡献指南](CONTRIBUTING.md)
请查看[贡献指南](CONTRIBUTING.md)。
## 系统设计主题的索引
@ -101,92 +101,92 @@
<br/>
</p>
* [系统设计主题:从这里开始](#系统设计主题从这里开始)
* [第一步:回顾可扩展性的视频讲座](#第一步回顾可扩展性scalability的视频讲座)
* [第二步:回顾可扩展性的文章](#第二步回顾可扩展性文章)
* [接下来的步骤](#接下来的步骤)
* [性能与拓展性](#性能与可扩展性)
* [延迟与吞吐量](#延迟与吞吐量)
* [可用性与一致性](#可用性与一致性)
* [CAP 理论](#cap-理论)
* [CP - 一致性和分区容错性](#cp--一致性和分区容错性)
* [AP - 可用性和分区容错性](#ap--可用性与分区容错性)
* [一致模式](#一致性模式)
* [弱一致性](#弱一致性)
* [最终一致性](#最终一致性)
* [强一致性](#强一致性)
* [可用模式](#可用性模式)
* [故障切换](#故障切换)
* [复制](#复制)
* [域名系统](#域名系统)
* [CDN](#内容分发网络cdn)
* [CDN 推送](#cdn-推送push)
* [CDN 拉取](#cdn-拉取pull)
* [负载均衡器](#负载均衡器)
* [工作到备用切换Active-passive](#工作到备用切换active-passive)
* [双工作切换Active-active](#双工作切换active-active)
* [四层负载均衡](#四层负载均衡)
* [七层负载均衡](#七层负载均衡器)
* [水平扩展](#水平扩展)
* [反向代理web 服务器)](#反向代理web-服务器)
* [负载均衡与反向代理](#负载均衡器与反向代理)
* [应用层](#应用层)
* [微服务](#微服务)
* [服务发现](#服务发现)
* [数据库](#数据库)
* [关系型数据库管理系统RDBMS](#关系型数据库管理系统rdbms)
* [Master-slave 复制集](#主从复制)
* [Master-master 复制集](#主主复制)
* [联合](#联合)
* [分片](#分片)
* [非规范化](#非规范化)
* [SQL 调优](#sql-调优)
* [NoSQL](#nosql)
* [Key-value 存储](#键-值存储)
* [文档存储](#文档类型存储)
* [宽列存储](#列型存储)
* [图数据库](#图数据库)
* [SQL 还是 NoSQL](#sql-还是-nosql)
* [缓存](#缓存)
* [客户端缓存](#客户端缓存)
* [CDN 缓存](#cdn-缓存)
* [Web 服务器缓存](#web-服务器缓存)
* [数据库缓存](#数据库缓存)
* [应用缓存](#应用缓存)
* [数据库查询级别的缓存](#数据库查询级别的缓存)
* [对象级别的缓存](#对象级别的缓存)
* [何时更新缓存](#何时更新缓存)
* [缓存模式](#缓存模式)
* [直写模式](#直写模式)
* [回写模式](#回写模式)
* [刷新](#刷新)
* [异步](#异步)
* [消息队列](#消息队列)
* [任务队列](#任务队列)
* [背压机制](#背压)
* [通讯](#通讯)
* [传输控制协议TCP](#传输控制协议tcp)
* [用户数据报协议UDP](#用户数据报协议udp)
* [远程控制调用协议RPC](#远程过程调用协议rpc)
* [表述性状态转移REST](#表述性状态转移rest)
* [安全](#安全)
* [附录](#附录)
* [2 的次方表](#2-的次方表)
* [每个程序员都应该知道的延迟数](#每个程序员都应该知道的延迟数)
* [其它的系统设计面试题](#其它的系统设计面试题)
* [真实架构](#真实架构)
* [公司的系统架构](#公司的系统架构)
* [公司工程博客](#公司工程博客)
* [正在完善中](#正在完善中)
* [致谢](#致谢)
* [联系方式](#联系方式)
* [许可](#许可)
## 学习指引
> 基于你面试的时间线(短、中、长)去复习那些推荐的主题。
![Imgur](images/OfVllex.png)
![Imgur](images/OfVllex.png)
**问:对于面试来说,我需要知道这里的所有知识点吗?**
@ -211,18 +211,18 @@
| | 短期 | 中期 | 长期 |
| ---------------------------------------- | ---- | ---- | ---- |
| 阅读 [系统设计主题](#系统设计主题的索引) 以获得一个关于系统如何工作的宽泛的认识 | :+1: | :+1: | :+1: |
| 阅读一些你要面试的[公司工程博客](#公司工程博客) 的文章 | :+1: | :+1: | :+1: |
| 阅读 [真实架构](#真实架构) | :+1: | :+1: | :+1: |
| 复习 [如何处理一个系统设计面试题](#如何处理一个系统设计面试题) | :+1: | :+1: | :+1: |
| 完成 [系统设计的面试题和解答](#系统设计的面试题和解答) | 一些 | 很多 | 大部分 |
| 完成 [面向对象设计的面试题和解答](#面向对象设计的面试问题及解答) | 一些 | 很多 | 大部分 |
| 复习 [其它的系统设计面试题](#其它的系统设计面试题) | 一些 | 很多 | 大部分 |
| 阅读一些你要面试的[公司工程博客](#公司工程博客)的文章 | :+1: | :+1: | :+1: |
| 阅读 [真实架构](#真实架构) | :+1: | :+1: | :+1: |
| 复习 [如何处理一个系统设计面试题](#如何处理一个系统设计面试题) | :+1: | :+1: | :+1: |
| 完成 [系统设计的面试题和解答](#系统设计的面试题和解答) | 一些 | 很多 | 大部分 |
| 完成 [面向对象设计的面试题和解答](#面向对象设计的面试问题及解答) | 一些 | 很多 | 大部分 |
| 复习 [其它的系统设计面试题](#其它的系统设计面试题) | 一些 | 很多 | 大部分 |
## 如何处理一个系统设计的面试题
系统设计面试是一个**开放式的对话**。他们期望你去主导这个对话。
你可以使用下面的步骤来指引讨论。为了巩固这个过程,请使用下面的步骤完成[系统设计的面试题和解答](#系统设计的面试题和解答) 这个章节。
你可以使用下面的步骤来指引讨论。为了巩固这个过程,请使用下面的步骤完成[系统设计的面试题和解答](#系统设计的面试题和解答)这个章节。
### 第一步:描述使用场景,约束和假设
@ -246,10 +246,10 @@
### 第三步:设计核心组件
对每一个核心组件进行详细深入的分析。举例来说,如果你被问到[设计一个 url 缩写服务](solutions/system_design/pastebin/README.md) ,开始讨论:
对每一个核心组件进行详细深入的分析。举例来说,如果你被问到[设计一个 url 缩写服务](solutions/system_design/pastebin/README.md),开始讨论:
* 生成并储存一个完整 url 的 hash
* [MD5](solutions/system_design/pastebin/README.md) 和 [Base62](solutions/system_design/pastebin/README.md)
* [MD5](solutions/system_design/pastebin/README.md) 和 [Base62](solutions/system_design/pastebin/README.md)
* Hash 碰撞
* SQL 还是 NoSQL
* 数据库模型
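The MD5-plus-Base62 step in the list above can be sketched as follows; the 7-character length and this particular Base62 alphabet are illustrative choices rather than part of the referenced solution.

```python
import hashlib
import string

ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase  # 62 symbols

def base62_encode(n):
    out = []
    while n:
        n, rem = divmod(n, 62)
        out.append(ALPHABET[rem])
    return "".join(reversed(out)) or ALPHABET[0]

def shorten(url):
    digest = hashlib.md5(url.encode()).hexdigest()   # 128-bit hash as hex
    # Keep a short prefix; collisions therefore must be detected against the data store.
    return base62_encode(int(digest, 16))[:7]

print(shorten("https://example.com/some/very/long/path"))
```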
@ -266,23 +266,23 @@
* 缓存
* 数据库分片
论述可能的解决办法和代价。每件事情需要取舍。可以使用[可扩展系统的设计原则](#系统设计主题的索引) 来处理瓶颈。
论述可能的解决办法和代价。每件事情需要取舍。可以使用[可扩展系统的设计原则](#系统设计主题的索引)来处理瓶颈。
### 预估计算量
你或许会被要求通过手算进行一些估算。[附录](#附录) 涉及到的是下面的这些资源:
你或许会被要求通过手算进行一些估算。[附录](#附录)涉及到的是下面的这些资源:
* [使用预估计算量](http://highscalability.com/blog/2011/1/26/google-pro-tip-use-back-of-the-envelope-calculations-to-choo.html)
* [2 的次方表](#2-的次方表)
* [每个程序员都应该知道的延迟数](#每个程序员都应该知道的延迟数)
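A tiny worked example of the kind of arithmetic these estimates involve; every input number below is invented purely for illustration.

```python
# Hypothetical workload: 10 million new pastes per month, ~1 KB each,
# a 10:1 read/write ratio, and content retained for 3 years.
writes_per_month = 10_000_000
bytes_per_paste = 1_000

writes_per_second = writes_per_month / (30 * 24 * 3600)        # ~3.9 writes/s
reads_per_second = 10 * writes_per_second                       # ~39 reads/s
storage_3_years = writes_per_month * 12 * 3 * bytes_per_paste   # ~360 GB

print(round(writes_per_second, 1), round(reads_per_second, 1), storage_3_years)
```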
### 相关资源和延伸阅读
查看下面的链接以获得我们期望的更好的想法:
* [怎样通过一个系统设计的面试](https://www.palantir.com/2011/10/how-to-rock-a-systems-design-interview/)
* [系统设计的面试](http://www.hiredintech.com/system-design)
* [系统架构与设计的面试简介](https://www.youtube.com/watch?v=ZgdS0EUmn70)
## 系统设计的面试题和解答
@ -293,7 +293,7 @@
| 问题 | |
| ---------------------------------------- | ---------------------------------------- |
| 设计 Pastebin.com (或者 Bit.ly) | [解答](solutions/system_design/pastebin/README-zh-Hans.md) |
| 设计 Pastebin.com (或者 Bit.ly) | [解答](solutions/system_design/pastebin/README-zh-Hans.md) |
| 设计 Twitter 时间线和搜索 (或者 Facebook feed 和搜索) | [解答](solutions/system_design/twitter/README.md) |
| 设计一个网页爬虫 | [解答](solutions/system_design/web_crawler/README.md) |
| 设计 Mint.com | [解答](solutions/system_design/mint/README.md) |
@ -301,55 +301,55 @@
| 为搜索引擎设计一个 key-value 储存 | [解答](solutions/system_design/query_cache/README.md) |
| 通过分类特性设计 Amazon 的销售排名 | [解答](solutions/system_design/sales_rank/README.md) |
| 在 AWS 上设计一个百万用户级别的系统 | [解答](solutions/system_design/scaling_aws/README.md) |
| 添加一个系统设计问题 | [贡献](#贡献) |
| 添加一个系统设计问题 | [贡献](#贡献) |
### 设计 Pastebin.com (或者 Bit.ly)
### 设计 Pastebin.com (或者 Bit.ly)
[查看实践与解答](solutions/system_design/pastebin/README.md)
[查看实践与解答](solutions/system_design/pastebin/README.md)
![Imgur](images/4edXG0T.png)
![Imgur](images/4edXG0T.png)
### 设计 Twitter 时间线和搜索 (或者 Facebook feed 和搜索)
### 设计 Twitter 时间线和搜索 (或者 Facebook feed 和搜索)
[查看实践与解答](solutions/system_design/twitter/README.md)
[查看实践与解答](solutions/system_design/twitter/README.md)
![Imgur](images/jrUBAF7.png)
![Imgur](images/jrUBAF7.png)
### 设计一个网页爬虫
[查看实践与解答](solutions/system_design/web_crawler/README.md)
[查看实践与解答](solutions/system_design/web_crawler/README.md)
![Imgur](images/bWxPtQA.png)
![Imgur](images/bWxPtQA.png)
### 设计 Mint.com
[查看实践与解答](solutions/system_design/mint/README.md)
[查看实践与解答](solutions/system_design/mint/README.md)
![Imgur](images/V5q57vU.png)
![Imgur](images/V5q57vU.png)
### 为一个社交网络设计数据结构
[查看实践与解答](solutions/system_design/social_graph/README.md)
[查看实践与解答](solutions/system_design/social_graph/README.md)
![Imgur](images/cdCv5g7.png)
![Imgur](images/cdCv5g7.png)
### 为搜索引擎设计一个 key-value 储存
[查看实践与解答](solutions/system_design/query_cache/README.md)
[查看实践与解答](solutions/system_design/query_cache/README.md)
![Imgur](images/4j99mhe.png)
![Imgur](images/4j99mhe.png)
### 设计按类别分类的 Amazon 销售排名
[查看实践与解答](solutions/system_design/sales_rank/README.md)
[查看实践与解答](solutions/system_design/sales_rank/README.md)
![Imgur](images/MzExP06.png)
![Imgur](images/MzExP06.png)
### 在 AWS 上设计一个百万用户级别的系统
[查看实践与解答](solutions/system_design/scaling_aws/README.md)
[查看实践与解答](solutions/system_design/scaling_aws/README.md)
![Imgur](images/jj3A5N8.png)
![Imgur](images/jj3A5N8.png)
## 面向对象设计的面试问题及解答
@ -367,8 +367,8 @@
| 设计一副牌 | [解决方案](solutions/object_oriented_design/deck_of_cards/deck_of_cards.ipynb) |
| 设计一个停车场 | [解决方案](solutions/object_oriented_design/parking_lot/parking_lot.ipynb) |
| 设计一个聊天服务 | [解决方案](solutions/object_oriented_design/online_chat/online_chat.ipynb) |
| 设计一个环形数组 | [待解决](#贡献) |
| 添加一个面向对象设计问题 | [待解决](#贡献) |
| 设计一个环形数组 | [待解决](#贡献) |
| 添加一个面向对象设计问题 | [待解决](#贡献) |
## 系统设计主题:从这里开始
@ -378,7 +378,7 @@
### 第一步回顾可扩展性scalability的视频讲座
[哈佛大学可扩展性讲座](https://www.youtube.com/watch?v=-W9F__D3oY4)
[哈佛大学可扩展性讲座](https://www.youtube.com/watch?v=-W9F__D3oY4)
* 主题涵盖
* 垂直扩展Vertical scaling
@ -390,13 +390,13 @@
### 第二步:回顾可扩展性文章
[可扩展性](http://www.lecloud.net/tagged/scalability/chrono)
[可扩展性](http://www.lecloud.net/tagged/scalability/chrono)
* 主题涵盖:
* [Clones](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
* [数据库](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
* [缓存](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
* [异步](http://www.lecloud.net/post/9699762917/scalability-for-dummies-part-4-asynchronism)
### 接下来的步骤
@ -421,8 +421,8 @@
### 来源及延伸阅读
* [简单谈谈可扩展性](http://www.allthingsdistributed.com/2006/03/a_word_on_scalability.html)
* [可扩展性,可用性,稳定性和模式](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
* [简单谈谈可扩展性](http://www.allthingsdistributed.com/2006/03/a_word_on_scalability.html)
* [可扩展性,可用性,稳定性和模式](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
## 延迟与吞吐量
@ -434,7 +434,7 @@
### 来源及延伸阅读
* [理解延迟与吞吐量](https://community.cadence.com/cadence_blogs_8/b/sd/archive/2010/09/13/understanding-latency-vs-throughput)
* [理解延迟与吞吐量](https://community.cadence.com/cadence_blogs_8/b/sd/archive/2010/09/13/understanding-latency-vs-throughput)
## 可用性与一致性
@ -462,17 +462,17 @@
响应节点上可用数据的最近版本可能并不是最新的。当分区解析完后,写入(操作)可能需要一些时间来传播。
如果业务需求允许[最终一致性](#最终一致性) 或当有外部故障时要求系统继续运行AP 是一个不错的选择。
如果业务需求允许[最终一致性](#最终一致性)或当有外部故障时要求系统继续运行AP 是一个不错的选择。
### 来源及延伸阅读
* [再看 CAP 理论](http://robertgreiner.com/2014/08/cap-theorem-revisited/)
* [通俗易懂地介绍 CAP 理论](http://ksat.me/a-plain-english-introduction-to-cap-theorem/)
* [CAP FAQ](https://github.com/henryr/cap-faq)
## 一致性模式
有同一份数据的多份副本,我们面临着怎样同步它们的选择,以便让客户端有一致的显示数据。回想 [CAP 理论](#cap-理论) 中的一致性定义 ─ 每次访问都能获得最新数据但可能会收到错误响应
有同一份数据的多份副本,我们面临着怎样同步它们的选择,以便让客户端有一致的显示数据。回想 [CAP 理论](#cap-理论)中的一致性定义 ─ 每次访问都能获得最新数据但可能会收到错误响应
### 弱一致性
@ -495,7 +495,7 @@ DNS 和 email 等系统使用的是此种方式。最终一致性在高可用性
### 来源及延伸阅读
* [Transactions across data centers](http://snarfed.org/transactions_across_datacenters_io.html)
* [Transactions across data centers](http://snarfed.org/transactions_across_datacenters_io.html)
## 可用性模式
@ -528,10 +528,10 @@ DNS 和 email 等系统使用的是此种方式。最终一致性在高可用性
#### 主─从复制和主─主复制
这个主题进一步探讨了[数据库](#数据库) 部分:
这个主题进一步探讨了[数据库](#数据库)部分:
* [主─从复制](#主从复制)
* [主─主复制](#主主复制)
* [主─从复制](#主从复制)
* [主─主复制](#主主复制)
## 域名系统
@ -543,7 +543,7 @@ DNS 和 email 等系统使用的是此种方式。最终一致性在高可用性
域名系统是把 www.example.com 等域名转换成 IP 地址。
域名系统是分层次的,一些 DNS 服务器位于顶层。当查询(域名) IP 时,路由或 ISP 提供连接 DNS 服务器的信息。较底层的 DNS 服务器缓存映射,它可能会因为 DNS 传播延时而失效。DNS 结果可以缓存在浏览器或操作系统中一段时间,时间长短取决于[存活时间 TTL](https://en.wikipedia.org/wiki/Time_to_live)
域名系统是分层次的,一些 DNS 服务器位于顶层。当查询(域名) IP 时,路由或 ISP 提供连接 DNS 服务器的信息。较底层的 DNS 服务器缓存映射,它可能会因为 DNS 传播延时而失效。DNS 结果可以缓存在浏览器或操作系统中一段时间,时间长短取决于[存活时间 TTL](https://en.wikipedia.org/wiki/Time_to_live)。
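The name-to-address translation described above can be observed directly from Python; depending on the resolver, the answer may come from one of the caches mentioned, subject to the record's TTL.

```python
import socket

# Ask the system resolver for the addresses behind a host name.
infos = socket.getaddrinfo("www.example.com", 443, proto=socket.IPPROTO_TCP)
for family, _type, _proto, _canonname, sockaddr in infos:
    print(family.name, sockaddr[0])   # address family and the resolved IP address
```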
* **NS 记录(域名服务)** ─ 指定解析域名或子域名的 DNS 服务器。
* **MX 记录(邮件交换)** ─ 指定接收信息的邮件服务器。
@ -552,7 +552,7 @@ DNS 和 email 等系统使用的是此种方式。最终一致性在高可用性
[CloudFlare](https://www.cloudflare.com/dns/) 和 [Route 53](https://aws.amazon.com/route53/) 等平台提供管理 DNS 的功能。某些 DNS 服务通过集中方式来路由流量:
* [加权轮询调度](http://g33kinfo.com/info/archives/2657)
* [加权轮询调度](http://g33kinfo.com/info/archives/2657)
* 防止流量进入维护中的服务器
* 在不同大小集群间负载均衡
* A/B 测试
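
下面是上文提到的加权轮询调度的一个极简示意(假设性的 Python 草图,并非任何 DNS 服务的真实实现,服务器名与权重均为示例值),采用与 NGINX 类似的平滑加权轮询思路:

```python
def smooth_weighted_round_robin(servers):
    """servers: {"ns1.example.com": 5, "ns2.example.com": 1},按权重平滑地轮流产出服务器。"""
    current = {name: 0 for name in servers}
    total = sum(servers.values())
    while True:
        for name, weight in servers.items():
            current[name] += weight
        best = max(current, key=current.get)  # 选出当前累计权重最大的服务器
        current[best] -= total
        yield best

picker = smooth_weighted_round_robin({"ns1.example.com": 5, "ns2.example.com": 1})
print([next(picker) for _ in range(6)])  # ns1 大约每 6 次出现 5 次
```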
@ -562,14 +562,14 @@ DNS 和 email 等系统使用的是此种方式。最终一致性在高可用性
### 缺陷:DNS
* 虽说缓存可以减轻 DNS 延迟,但连接 DNS 服务器还是带来了轻微的延迟。
* 虽然它们通常由[政府,网络服务提供商和大公司](http://superuser.com/questions/472695/who-controls-the-dns-servers/472729) 管理,但 DNS 服务管理仍可能是复杂的。
* DNS 服务最近遭受 [DDoS 攻击](http://dyn.com/blog/dyn-analysis-summary-of-friday-october-21-attack/) ,阻止不知道 Twitter IP 地址的用户访问 Twitter。
* 虽然它们通常由[政府,网络服务提供商和大公司](http://superuser.com/questions/472695/who-controls-the-dns-servers/472729)管理,但 DNS 服务管理仍可能是复杂的。
* DNS 服务最近遭受 [DDoS 攻击](http://dyn.com/blog/dyn-analysis-summary-of-friday-october-21-attack/),阻止不知道 Twitter IP 地址的用户访问 Twitter。
### 来源及延伸阅读
* [DNS 架构](https://technet.microsoft.com/en-us/library/dd197427(v=ws.10) .aspx)
* [Wikipedia](https://en.wikipedia.org/wiki/Domain_Name_System)
* [关于 DNS 的文章](https://support.dnsimple.com/categories/dns/)
* [DNS 架构](https://technet.microsoft.com/en-us/library/dd197427(v=ws.10).aspx)
* [Wikipedia](https://en.wikipedia.org/wiki/Domain_Name_System)
* [关于 DNS 的文章](https://support.dnsimple.com/categories/dns/)
## 内容分发网络CDN
@ -594,7 +594,7 @@ DNS 和 email 等系统使用的是此种方式。最终一致性在高可用性
CDN 拉取是当第一个用户请求该资源时,才从源服务器上拉取资源。你将内容留在自己的服务器上并重写 URL 指向 CDN 地址。在内容被缓存到 CDN 上之前,这样的请求会比较慢。
[存活时间TTL](https://en.wikipedia.org/wiki/Time_to_live) 决定缓存多久时间。CDN 拉取方式最小化 CDN 上的储存空间,但如果文件已过期、却在实际更改之前就被重新拉取,则会导致冗余的流量。
[存活时间TTL](https://en.wikipedia.org/wiki/Time_to_live)决定缓存多久时间。CDN 拉取方式最小化 CDN 上的储存空间,但如果文件已过期、却在实际更改之前就被重新拉取,则会导致冗余的流量。
高流量站点使用 CDN 拉取效果不错,因为只有最近请求的内容保存在 CDN 中,流量才能更平衡地分散。
@ -606,9 +606,9 @@ CDN 拉取是当第一个用户请求该资源时,从服务器上拉取资源
### 来源及延伸阅读
* [全球性内容分发网络](http://repository.cmu.edu/cgi/viewcontent.cgi?article=2112&context=compsci)
* [CDN 拉取和 CDN 推送的区别](http://www.travelblogadvice.com/technical/the-differences-between-push-and-pull-cdns/)
* [Wikipedia](https://en.wikipedia.org/wiki/Content_delivery_network)
* [全球性内容分发网络](http://repository.cmu.edu/cgi/viewcontent.cgi?article=2112&context=compsci)
* [CDN 拉取和 CDN 推送的区别](http://www.travelblogadvice.com/technical/the-differences-between-push-and-pull-cdns/)
* [Wikipedia](https://en.wikipedia.org/wiki/Content_delivery_network)
## 负载均衡器
@ -628,7 +628,7 @@ CDN 拉取是当第一个用户请求该资源时,从服务器上拉取资源
增加的好处包括:
* **SSL 终结** ─ 解密传入的请求并加密服务器响应,这样的话后端服务器就不必再执行这些潜在高消耗运算了。
  * 不需要在每台服务器上安装 [X.509 证书](https://en.wikipedia.org/wiki/X.509)
  * 不需要在每台服务器上安装 [X.509 证书](https://en.wikipedia.org/wiki/X.509)。
* **Session 留存** ─ 如果 Web 应用程序本身不记录会话,则(由负载均衡器)发出 cookie并将特定客户端的请求路由到同一实例。
通常会设置采用[工作─备用](#工作到备用切换active-passive) 或 [双工作](#双工作切换active-active) 模式的多个负载均衡器,以免发生故障。
@ -638,17 +638,17 @@ CDN 拉取是当第一个用户请求该资源时,从服务器上拉取资源
* 随机
* 最少负载
* Session/cookie
* [轮询调度或加权轮询调度算法](http://g33kinfo.com/info/archives/2657)
* [四层负载均衡](#四层负载均衡)
* [七层负载均衡](#七层负载均衡)
* [轮询调度或加权轮询调度算法](http://g33kinfo.com/info/archives/2657)
* [四层负载均衡](#四层负载均衡)
* [七层负载均衡](#七层负载均衡)
### 四层负载均衡
四层负载均衡根据监看[传输层](#通讯) 的信息来决定如何分发请求。通常,这会涉及来源,目标 IP 地址和请求头中的端口,但不包括数据包(报文)内容。四层负载均衡执行[网络地址转换NAT](https://www.nginx.com/resources/glossary/layer-4-load-balancing/) 来向上游服务器转发网络数据包。
四层负载均衡根据监看[传输层](#通讯)的信息来决定如何分发请求。通常,这会涉及来源,目标 IP 地址和请求头中的端口,但不包括数据包(报文)内容。四层负载均衡执行[网络地址转换NAT](https://www.nginx.com/resources/glossary/layer-4-load-balancing/)来向上游服务器转发网络数据包。
### 七层负载均衡器
七层负载均衡器根据监控[应用层](#通讯) 来决定怎样分发请求。这会涉及请求头的内容,消息和 cookie。七层负载均衡器终结网络流量读取消息做出负载均衡判定然后传送给特定服务器。比如一个七层负载均衡器能直接将视频流量连接到托管视频的服务器同时将更敏感的用户账单流量引导到安全性更强的服务器。
七层负载均衡器根据监控[应用层](#通讯)来决定怎样分发请求。这会涉及请求头的内容,消息和 cookie。七层负载均衡器终结网络流量读取消息做出负载均衡判定然后传送给特定服务器。比如一个七层负载均衡器能直接将视频流量连接到托管视频的服务器同时将更敏感的用户账单流量引导到安全性更强的服务器。
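
作为补充,下面用一小段假设性的 Python 草图示意七层负载均衡的思路:根据请求路径等应用层信息,把流量分发到不同的后端池(后端地址、路径前缀均为示例值,并非某个真实负载均衡器的实现):

```python
import itertools

# 假设的后端池,池内做简单轮询
VIDEO_BACKENDS = itertools.cycle(["video-1:8080", "video-2:8080"])
BILLING_BACKENDS = itertools.cycle(["billing-1:8443"])
DEFAULT_BACKENDS = itertools.cycle(["web-1:8080", "web-2:8080"])

def route(path, headers):
    """读取应用层信息(路径、header)后选择后端。"""
    if path.startswith("/video/"):
        return next(VIDEO_BACKENDS)
    if path.startswith("/billing/"):
        return next(BILLING_BACKENDS)    # 更敏感的账单流量走安全性更强的后端
    return next(DEFAULT_BACKENDS)

print(route("/video/1234", {}))       # video-1:8080
print(route("/billing/invoice", {}))  # billing-1:8443
```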
以损失灵活性为代价,四层负载均衡比七层负载均衡花费更少时间和计算资源,虽然这对现代商用硬件的性能影响甚微。
@ -660,7 +660,7 @@ CDN 拉取是当第一个用户请求该资源时,从服务器上拉取资源
* 水平扩展引入了复杂度并涉及服务器复制
* 服务器应该是无状态的:它们也不该包含像 session 或资料图片等与用户关联的数据。
* session 可以集中存储在数据库或持久化[缓存](#缓存) Redis、Memcached的数据存储区中。
* session 可以集中存储在数据库或持久化[缓存](#缓存)Redis、Memcached的数据存储区中。
* 缓存和数据库等下游服务器需要随着上游服务器进行扩展,以处理更多的并发连接。
### 缺陷:负载均衡器
@ -671,13 +671,13 @@ CDN 拉取是当第一个用户请求该资源时,从服务器上拉取资源
### 来源及延伸阅读
* [NGINX 架构](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
* [HAProxy 架构指南](http://www.haproxy.org/download/1.2/doc/architecture.txt)
* [可扩展性](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
* [NGINX 架构](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
* [HAProxy 架构指南](http://www.haproxy.org/download/1.2/doc/architecture.txt)
* [可扩展性](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
* [Wikipedia](https://en.wikipedia.org/wiki/Load_balancing_(computing))
* [四层负载平衡](https://www.nginx.com/resources/glossary/layer-4-load-balancing/)
* [七层负载平衡](https://www.nginx.com/resources/glossary/layer-7-load-balancing/)
* [ELB 监听器配置](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html)
* [四层负载平衡](https://www.nginx.com/resources/glossary/layer-4-load-balancing/)
* [七层负载平衡](https://www.nginx.com/resources/glossary/layer-7-load-balancing/)
* [ELB 监听器配置](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html)
## 反向代理web 服务器)
@ -713,15 +713,15 @@ CDN 拉取是当第一个用户请求该资源时,从服务器上拉取资源
### 不利之处:反向代理
- 引入反向代理会增加系统的复杂度。
- 单独一个反向代理服务器仍可能发生单点故障,配置多台反向代理服务器(如[故障转移](https://en.wikipedia.org/wiki/Failover) )会进一步增加复杂度。
- 单独一个反向代理服务器仍可能发生单点故障,配置多台反向代理服务器(如[故障转移](https://en.wikipedia.org/wiki/Failover))会进一步增加复杂度。
### 来源及延伸阅读
- [反向代理与负载均衡](https://www.nginx.com/resources/glossary/reverse-proxy-vs-load-balancer/)
- [NGINX 架构](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
- [HAProxy 架构指南](http://www.haproxy.org/download/1.2/doc/architecture.txt)
- [Wikipedia](https://en.wikipedia.org/wiki/Reverse_proxy)
- [反向代理与负载均衡](https://www.nginx.com/resources/glossary/reverse-proxy-vs-load-balancer/)
- [NGINX 架构](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
- [HAProxy 架构指南](http://www.haproxy.org/download/1.2/doc/architecture.txt)
- [Wikipedia](https://en.wikipedia.org/wiki/Reverse_proxy)
## 应用层
@ -735,17 +735,17 @@ CDN 拉取是当第一个用户请求该资源时,从服务器上拉取资源
**单一职责原则**提倡小型的,自治的服务共同合作。小团队通过提供小型的服务,可以更激进地计划增长。
应用层中的工作进程也有助于实现[异步化](#异步)
应用层中的工作进程也有助于实现[异步化](#异步)。
### 微服务
与此讨论相关的话题是 [微服务](https://en.wikipedia.org/wiki/Microservices) ,可以被描述为一系列可以独立部署的小型的,模块化服务。每个服务运行在一个独立的进程中,通过明确定义的轻量级机制通讯,共同实现业务目标。<sup><a href=https://smartbear.com/learn/api-design/what-are-microservices>1</a></sup>
与此讨论相关的话题是 [微服务](https://en.wikipedia.org/wiki/Microservices),可以被描述为一系列可以独立部署的小型的,模块化服务。每个服务运行在一个独立的进程中,通过明确定义的轻量级机制通讯,共同实现业务目标。<sup><a href=https://smartbear.com/learn/api-design/what-are-microservices>1</a></sup>
例如Pinterest 可能有这些微服务: 用户资料、关注者、Feed 流、搜索、照片上传等。
### 服务发现
像 [Consul](https://www.consul.io/docs/index.html) [Etcd](https://coreos.com/etcd/docs/latest) 和 [Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) 这样的系统可以通过追踪注册名、地址、端口等信息来帮助服务互相发现对方。[Health checks](https://www.consul.io/intro/getting-started/checks.html) 可以帮助确认服务的完整性,通常借助一个 [HTTP](#超文本传输协议http) 路径来实现。Consul 和 Etcd 都有一个内建的 [key-value 存储](#键-值存储) 用来存储配置信息和其他的共享信息。
像 [Consul](https://www.consul.io/docs/index.html)[Etcd](https://coreos.com/etcd/docs/latest) 和 [Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) 这样的系统可以通过追踪注册名、地址、端口等信息来帮助服务互相发现对方。[Health checks](https://www.consul.io/intro/getting-started/checks.html) 可以帮助确认服务的完整性,通常借助一个 [HTTP](#超文本传输协议http) 路径来实现。Consul 和 Etcd 都有一个内建的 [key-value 存储](#键-值存储) 用来存储配置信息和其他的共享信息。
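
下面是服务注册与健康检查思路的一个极简内存版示意(假设性草图,真实场景应直接使用 Consul、Etcd、Zookeeper 等;其中的服务名、端口和 /health 路径都是示例):

```python
import time
import urllib.request

_registry = {}  # 服务名 -> 实例列表

def register(name, address, port, health_path="/health"):
    """服务启动时注册自己的名称、地址和端口。"""
    _registry.setdefault(name, []).append(
        {"address": address, "port": port, "health": health_path,
         "registered_at": time.time()})

def healthy_instances(name, timeout=1.0):
    """通过 HTTP 健康检查路径过滤出仍然存活的实例(假设实例暴露了该路径)。"""
    alive = []
    for inst in _registry.get(name, []):
        url = "http://{address}:{port}{health}".format(**inst)
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    alive.append(inst)
        except OSError:
            pass  # 连接失败或超时,视为不健康
    return alive

register("photo-upload", "10.0.0.7", 8080)
print(healthy_instances("photo-upload"))
```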
### 不利之处:应用层
@ -755,11 +755,11 @@ CDN 拉取是当第一个用户请求该资源时,从服务器上拉取资源
### 来源及延伸阅读
- [可缩放系统构架介绍](http://lethain.com/introduction-to-architecting-systems-for-scale)
- [破解系统设计面试](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
- [面向服务架构](https://en.wikipedia.org/wiki/Service-oriented_architecture)
- [Zookeeper 介绍](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper)
- [构建微服务,你所需要知道的一切](https://cloudncode.wordpress.com/2016/07/22/msa-getting-started/)
- [可缩放系统构架介绍](http://lethain.com/introduction-to-architecting-systems-for-scale)
- [破解系统设计面试](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
- [面向服务架构](https://en.wikipedia.org/wiki/Service-oriented_architecture)
- [Zookeeper 介绍](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper)
- [构建微服务,你所需要知道的一切](https://cloudncode.wordpress.com/2016/07/22/msa-getting-started/)
## 数据库
@ -775,7 +775,7 @@ CDN 拉取是当第一个用户请求该资源时,从服务器上拉取资源
> 校对注:这里作者 SQL 可能指的是 MySQL
**ACID** 用来描述关系型数据库[事务](https://en.wikipedia.org/wiki/Database_transaction) 的特性。
**ACID** 用来描述关系型数据库[事务](https://en.wikipedia.org/wiki/Database_transaction)的特性。
- **原子性** - 每个事务内部所有操作要么全部完成,要么全部不完成。
- **一致性** - 任何事务都使数据库从一个有效的状态转换到另一个有效状态。
@ -797,7 +797,7 @@ CDN 拉取是当第一个用户请求该资源时,从服务器上拉取资源
##### 不利之处:主从复制
- 将从库提升为主库需要额外的逻辑。
- 参考[不利之处:复制](#不利之处复制) 中,主从复制和主主复制**共同**的问题。
- 参考[不利之处:复制](#不利之处复制)中,主从复制和主主复制**共同**的问题。
<p align="center">
<img src="images/krAHLGg.png">
@ -814,7 +814,7 @@ CDN 拉取是当第一个用户请求该资源时,从服务器上拉取资源
- 你需要添加负载均衡器或者在应用逻辑中做改动,来确定写入哪一个数据库。
- 多数主-主系统要么不能保证一致性(违反 ACID要么因为同步产生了写入延迟。
- 随着更多写入节点的加入和延迟的提高,如何解决冲突显得越发重要。
- 参考[不利之处:复制](#不利之处复制) 中,主从复制和主主复制**共同**的问题。
- 参考[不利之处:复制](#不利之处复制)中,主从复制和主主复制**共同**的问题。
##### 不利之处:复制
@ -829,8 +829,8 @@ CDN 拉取是当第一个用户请求该资源时,从服务器上拉取资源
##### 来源及延伸阅读
- [扩展性,可用性,稳定性模式](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
- [多主复制](https://en.wikipedia.org/wiki/Multi-master_replication)
- [扩展性,可用性,稳定性模式](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
- [多主复制](https://en.wikipedia.org/wiki/Multi-master_replication)
#### 联合
@ -852,7 +852,7 @@ CDN 拉取是当第一个用户请求该资源时,从服务器上拉取资源
##### 来源及延伸阅读:联合
- [扩展你的用户数到第一个一千万](https://www.youtube.com/watch?v=w95murBkYmU)
- [扩展你的用户数到第一个一千万](https://www.youtube.com/watch?v=w95murBkYmU)
#### 分片
@ -864,7 +864,7 @@ CDN 拉取是当第一个用户请求该资源时,从服务器上拉取资源
分片将数据分配在不同的数据库上,使得每个数据库仅管理整个数据集的一个子集。以用户数据库为例,随着用户数量的增加,越来越多的分片会被添加到集群中。
类似[联合](#联合) 的优点,分片可以减少读取和写入流量,减少复制并提高缓存命中率。也减少了索引,通常意味着查询更快,性能更好。如果一个分片出问题,其他的仍能运行,你可以使用某种形式的冗余来防止数据丢失。类似联合,没有只能串行写入的中心化主库,你可以并行写入,提高负载能力。
类似[联合](#联合)的优点,分片可以减少读取和写入流量,减少复制并提高缓存命中率。也减少了索引,通常意味着查询更快,性能更好。如果一个分片出问题,其他的仍能运行,你可以使用某种形式的冗余来防止数据丢失。类似联合,没有只能串行写入的中心化主库,你可以并行写入,提高负载能力。
常见的做法是用户姓氏的首字母或者用户的地理位置来分隔用户表。
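
下面用一小段假设性的 Python 草图示意这两种分片键的选择方式(分片数、表名均为示例):

```python
import hashlib

NUM_SHARDS = 4  # 假设的分片数

def shard_by_last_name(last_name):
    """按姓氏首字母划分到固定的字母区间(简化示意)。"""
    ranges = ["ABCDEF", "GHIJKLM", "NOPQRS", "TUVWXYZ"]
    initial = last_name[:1].upper()
    for i, letters in enumerate(ranges):
        if initial in letters:
            return "users_{0}".format(i)
    return "users_0"  # 非字母开头的姓氏放到默认分片

def shard_by_hash(user_id):
    """按用户 ID 哈希取模;注意:分片数一旦变化,大多数键都需要迁移。"""
    digest = hashlib.md5(str(user_id).encode("utf-8")).hexdigest()
    return "users_{0}".format(int(digest, 16) % NUM_SHARDS)

print(shard_by_last_name("Zhang"))  # users_3
print(shard_by_hash(12345))
```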
@ -872,21 +872,21 @@ CDN 拉取是当第一个用户请求该资源时,从服务器上拉取资源
- 你需要修改应用程序的逻辑来实现分片,这会带来复杂的 SQL 查询。
- 分片不合理可能导致数据负载不均衡。例如,被频繁访问的用户数据会导致其所在分片的负载相对其他分片高。
- 再平衡会引入额外的复杂度。基于[一致性哈希](http://www.paperplanes.de/2011/12/9/the-magic-of-consistent-hashing.html) 的分片算法可以减少这种情况。
- 再平衡会引入额外的复杂度。基于[一致性哈希](http://www.paperplanes.de/2011/12/9/the-magic-of-consistent-hashing.html)的分片算法可以减少这种情况。
- 联结多个分片的数据操作更复杂。
- 分片需要更多的硬件和额外的复杂度。
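
针对上面提到的再平衡问题,下面给出一致性哈希环的一个极简示意(假设性草图,虚拟节点数与节点名均为示例;真实系统通常直接使用现成的库):

```python
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, nodes, replicas=100):
        self.replicas = replicas   # 每个节点的虚拟节点数,用于让分布更均匀
        self._keys = []            # 环上已排序的哈希值
        self._ring = {}            # 哈希值 -> 节点名
        for node in nodes:
            self.add(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

    def add(self, node):
        for i in range(self.replicas):
            h = self._hash("{0}#{1}".format(node, i))
            bisect.insort(self._keys, h)
            self._ring[h] = node

    def get(self, key):
        """顺时针找到第一个虚拟节点;增删节点时只有相邻区间的键需要迁移。"""
        h = self._hash(key)
        idx = bisect.bisect(self._keys, h) % len(self._keys)
        return self._ring[self._keys[idx]]

ring = ConsistentHashRing(["shard-0", "shard-1", "shard-2"])
print(ring.get("user:12345"))
```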
#### 来源及延伸阅读:分片
- [分片时代来临](http://highscalability.com/blog/2009/8/6/an-unorthodox-approach-to-database-design-the-coming-of-the.html)
- [分片时代来临](http://highscalability.com/blog/2009/8/6/an-unorthodox-approach-to-database-design-the-coming-of-the.html)
- [数据库分片架构](https://en.wikipedia.org/wiki/Shard_(database_architecture))
- [一致性哈希](http://www.paperplanes.de/2011/12/9/the-magic-of-consistent-hashing.html)
- [一致性哈希](http://www.paperplanes.de/2011/12/9/the-magic-of-consistent-hashing.html)
#### 非规范化
非规范化试图以写入性能为代价来换取读取性能。在多个表中冗余数据副本,以避免高成本的联结操作。一些关系型数据库,比如 [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL) 和 Oracle 支持[物化视图](https://en.wikipedia.org/wiki/Materialized_view) ,可以处理冗余信息存储和保证冗余副本一致。
非规范化试图以写入性能为代价来换取读取性能。在多个表中冗余数据副本,以避免高成本的联结操作。一些关系型数据库,比如 [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL) 和 Oracle 支持[物化视图](https://en.wikipedia.org/wiki/Materialized_view),可以处理冗余信息存储和保证冗余副本一致。
当数据使用诸如[联合](#联合) 和[分片](#分片) 等技术被分割,进一步提高了处理跨数据中心的联结操作复杂度。非规范化可以规避这种复杂的联结操作。
当数据使用诸如[联合](#联合)和[分片](#分片)等技术被分割,进一步提高了处理跨数据中心的联结操作复杂度。非规范化可以规避这种复杂的联结操作。
在多数系统中,读取操作的频率远高于写入操作,比例可达到 100:1甚至 1000:1。需要复杂的数据库联结的读取操作成本非常高在磁盘操作上消耗了大量时间。
@ -898,16 +898,16 @@ CDN 拉取是当第一个用户请求该资源时,从服务器上拉取资源
##### 来源及延伸阅读:非规范化
- [非规范化](https://en.wikipedia.org/wiki/Denormalization)
- [非规范化](https://en.wikipedia.org/wiki/Denormalization)
#### SQL 调优
SQL 调优是一个范围很广的话题,有很多相关的[书](https://www.amazon.com/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords=sql+tuning) 可以作为参考。
SQL 调优是一个范围很广的话题,有很多相关的[书](https://www.amazon.com/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords=sql+tuning)可以作为参考。
利用**基准测试**和**性能分析**来模拟和发现系统瓶颈很重要。
- **基准测试** - 用 [ab](http://httpd.apache.org/docs/2.2/programs/ab.html) 等工具模拟高负载情况。
- **性能分析** - 通过启用如[慢查询日志](http://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html) 等工具来辅助追踪性能问题。
- **性能分析** - 通过启用如[慢查询日志](http://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html)等工具来辅助追踪性能问题。
基准测试和性能分析可能会指引你到以下优化方案。
@ -920,13 +920,13 @@ SQL 调优是一个范围很广的话题,有很多相关的[书](https://www.a
- 使用 `INT` 类型存储高达 2^32 或 40 亿的较大数字。
- 使用 `DECIMAL` 类型存储货币可以避免浮点数表示错误。
- 避免使用 `BLOBS` 存储实际对象,而是用来存储存放对象的位置。
- `VARCHAR(255) ` 是能用 8 位数字表示的最大字符数,在某些关系型数据库中,这样能最大限度地利用一个字节。
- 在适用场景中设置 `NOT NULL` 约束来[提高搜索性能](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search)
- `VARCHAR(255)` 是能用 8 位数字表示的最大字符数,在某些关系型数据库中,这样能最大限度地利用一个字节。
- 在适用场景中设置 `NOT NULL` 约束来[提高搜索性能](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search)。
##### 使用正确的索引
- 你正查询(`SELECT`、`GROUP BY`、`ORDER BY`、`JOIN`)的列如果用了索引会更快。
- 索引通常表示为自平衡的 [B 树](https://en.wikipedia.org/wiki/B-tree) ,可以保持数据有序,并允许在对数时间内进行搜索,顺序访问,插入,删除操作。
- 索引通常表示为自平衡的 [B 树](https://en.wikipedia.org/wiki/B-tree),可以保持数据有序,并允许在对数时间内进行搜索,顺序访问,插入,删除操作。
- 设置索引,会将数据存在内存中,占用了更多内存空间。
- 写入操作会变慢,因为索引需要被更新。
- 加载大量数据时,禁用索引再加载数据,然后重建索引,这样也许会更快。
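
下面用 SQLite自带于 Python 标准库)粗略演示索引对查询计划的影响(仅为示意,表结构与数据都是假设的,不同数据库的输出会不同):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [("user{0}@example.com".format(i),) for i in range(10000)])

def plan(query):
    return conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

query = "SELECT * FROM users WHERE email = 'user42@example.com'"
print(plan(query))  # 没有索引时:全表扫描
conn.execute("CREATE INDEX idx_users_email ON users (email)")
print(plan(query))  # 建立索引后:走 idx_users_email 索引
```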
@ -941,20 +941,20 @@ SQL 调优是一个范围很广的话题,有很多相关的[书](https://www.a
##### 调优查询缓存
- 在某些情况下,[查询缓存](http://dev.mysql.com/doc/refman/5.7/en/query-cache) 可能会导致[性能问题](https://www.percona.com/blog/2014/01/28/10-mysql-performance-tuning-settings-after-installation/)
- 在某些情况下,[查询缓存](http://dev.mysql.com/doc/refman/5.7/en/query-cache)可能会导致[性能问题](https://www.percona.com/blog/2014/01/28/10-mysql-performance-tuning-settings-after-installation/)。
##### 来源及延伸阅读
- [MySQL 查询优化小贴士](http://20bits.com/article/10-tips-for-optimizing-mysql-queries-that-dont-suck)
- [为什么 VARCHAR(255) 很常见?](http://stackoverflow.com/questions/1217466/is-there-a-good-reason-i-see-varchar255-used-so-often-as-opposed-to-another-l)
- [Null 值是如何影响数据库性能的?](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search)
- [慢查询日志](http://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html)
- [MySQL 查询优化小贴士](http://20bits.com/article/10-tips-for-optimizing-mysql-queries-that-dont-suck)
- [为什么 VARCHAR(255) 很常见?](http://stackoverflow.com/questions/1217466/is-there-a-good-reason-i-see-varchar255-used-so-often-as-opposed-to-another-l)
- [Null 值是如何影响数据库性能的?](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search)
- [慢查询日志](http://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html)
### NoSQL
NoSQL 是**键-值数据库**、**文档型数据库**、**列型数据库**或**图数据库**的统称。数据库是非规范化的,表联结大多在应用程序代码中完成。大多数 NoSQL 无法实现真正符合 ACID 的事务,支持[最终一致](#最终一致性)
NoSQL 是**键-值数据库**、**文档型数据库**、**列型数据库**或**图数据库**的统称。数据库是非规范化的,表联结大多在应用程序代码中完成。大多数 NoSQL 无法实现真正符合 ACID 的事务,支持[最终一致](#最终一致性)。
**BASE** 通常被用于描述 NoSQL 数据库的特性。相比 [CAP 理论](#cap-理论) BASE 强调可用性超过一致性。
**BASE** 通常被用于描述 NoSQL 数据库的特性。相比 [CAP 理论](#cap-理论)BASE 强调可用性超过一致性。
- **基本可用** - 系统保证可用性。
- **软状态** - 即使没有输入,系统状态也可能随着时间变化。
@ -966,7 +966,7 @@ NoSQL 是**键-值数据库**、**文档型数据库**、**列型数据库**或*
> 抽象模型:哈希表
键-值存储通常可以实现 O(1) 时间读写,用内存或 SSD 存储数据。数据存储可以按[字典顺序](https://en.wikipedia.org/wiki/Lexicographical_order) 维护键,从而实现键的高效检索。键-值存储可以用于存储元数据。
键-值存储通常可以实现 O(1) 时间读写,用内存或 SSD 存储数据。数据存储可以按[字典顺序](https://en.wikipedia.org/wiki/Lexicographical_order)维护键,从而实现键的高效检索。键-值存储可以用于存储元数据。
键-值存储性能很高,通常用于存储简单数据模型或频繁修改的数据,如存放在内存中的缓存。键-值存储提供的操作有限,如果需要更多操作,复杂度将转嫁到应用程序层面。
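
下面是一个极简的内存键-值存储示意(假设性草图),用有序的键支持字典顺序的范围/前缀查询:

```python
import bisect

class SortedKVStore:
    def __init__(self):
        self._keys = []   # 按字典顺序维护的键
        self._data = {}

    def set(self, key, value):
        if key not in self._data:
            bisect.insort(self._keys, key)
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)   # 平均 O(1) 读取

    def range(self, start, end):
        """返回 [start, end) 区间内按字典顺序排列的键值对。"""
        lo = bisect.bisect_left(self._keys, start)
        hi = bisect.bisect_left(self._keys, end)
        return [(k, self._data[k]) for k in self._keys[lo:hi]]

store = SortedKVStore()
store.set("user:100:name", "alice")
store.set("user:100:email", "alice@example.com")
print(store.range("user:100:", "user:100;"))  # ';' 是 ':' 之后的下一个 ASCII 字符,相当于前缀扫描
```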
@ -974,10 +974,10 @@ NoSQL 是**键-值数据库**、**文档型数据库**、**列型数据库**或*
#### 来源及延伸阅读
- [键-值数据库](https://en.wikipedia.org/wiki/Key-value_database)
- [键-值存储的劣势](http://stackoverflow.com/questions/4056093/what-are-the-disadvantages-of-using-a-key-value-table-over-nullable-columns-or)
- [Redis 架构](http://qnimate.com/overview-of-redis-architecture/)
- [Memcached 架构](https://www.adayinthelifeof.nl/2011/02/06/memcache-internals/)
- [键-值数据库](https://en.wikipedia.org/wiki/Key-value_database)
- [键-值存储的劣势](http://stackoverflow.com/questions/4056093/what-are-the-disadvantages-of-using-a-key-value-table-over-nullable-columns-or)
- [Redis 架构](http://qnimate.com/overview-of-redis-architecture/)
- [Memcached 架构](https://www.adayinthelifeof.nl/2011/02/06/memcache-internals/)
#### 文档类型存储
@ -993,10 +993,10 @@ MongoDB 和 CouchDB 等一些文档类型存储还提供了类似 SQL 语言的
#### 来源及延伸阅读:文档类型存储
- [面向文档的数据库](https://en.wikipedia.org/wiki/Document-oriented_database)
- [MongoDB 架构](https://www.mongodb.com/mongodb-architecture)
- [CouchDB 架构](https://blog.couchdb.org/2016/08/01/couchdb-2-0-architecture/)
- [Elasticsearch 架构](https://www.elastic.co/blog/found-elasticsearch-from-the-bottom-up)
- [面向文档的数据库](https://en.wikipedia.org/wiki/Document-oriented_database)
- [MongoDB 架构](https://www.mongodb.com/mongodb-architecture)
- [CouchDB 架构](https://blog.couchdb.org/2016/08/01/couchdb-2-0-architecture/)
- [Elasticsearch 架构](https://www.elastic.co/blog/found-elasticsearch-from-the-bottom-up)
#### 列型存储
@ -1010,16 +1010,16 @@ MongoDB 和 CouchDB 等一些文档类型存储还提供了类似 SQL 语言的
列型存储的基本数据单元是列(名/值对)。列可以在列族(类似于 SQL 的数据表)中被分组,超级列族再将列族分组。你可以使用行键独立访问每一列,具有相同行键值的列组成一行。每个值都包含一个时间戳作为版本,用于解决版本冲突。
Google 发布了第一个列型存储数据库 [Bigtable](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf) ,它影响了 Hadoop 生态系统中活跃的开源数据库 [HBase](https://www.mapr.com/blog/in-depth-look-hbase-architecture) 和 Facebook 的 [Cassandra](http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/architecture/architectureIntro_c.html) 。像 BigTableHBase 和 Cassandra 这样的存储系统将键以字母顺序存储,可以高效地读取键列。
Google 发布了第一个列型存储数据库 [Bigtable](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf),它影响了 Hadoop 生态系统中活跃的开源数据库 [HBase](https://www.mapr.com/blog/in-depth-look-hbase-architecture) 和 Facebook 的 [Cassandra](http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/architecture/architectureIntro_c.html)。像 BigTableHBase 和 Cassandra 这样的存储系统将键以字母顺序存储,可以高效地读取键列。
列型存储具备高可用性和高可扩展性。通常被用于大数据相关存储。
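
下面用嵌套字典粗略示意列族数据模型(假设性草图,行键、列族名均为示例):

```python
import time

table = {}  # 行键 -> 列族 -> 列名 -> (值, 时间戳)

def put(row_key, family, column, value):
    table.setdefault(row_key, {}).setdefault(family, {})[column] = (value, time.time())

def get(row_key, family, column):
    return table.get(row_key, {}).get(family, {}).get(column)

put("user#123", "profile", "name", "alice")
put("user#123", "stats", "followers", 42)
print(get("user#123", "profile", "name"))  # ('alice', <写入时间戳>)
```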
##### 来源及延伸阅读:列型存储
- [SQL 与 NoSQL 简史](http://blog.grio.com/2015/11/sql-nosql-a-brief-history.html)
- [BigTable 架构](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf)
- [Hbase 架构](https://www.mapr.com/blog/in-depth-look-hbase-architecture)
- [Cassandra 架构](http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/architecture/architectureIntro_c.html)
- [SQL 与 NoSQL 简史](http://blog.grio.com/2015/11/sql-nosql-a-brief-history.html)
- [BigTable 架构](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf)
- [Hbase 架构](https://www.mapr.com/blog/in-depth-look-hbase-architecture)
- [Cassandra 架构](http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/architecture/architectureIntro_c.html)
#### 图数据库
@ -1036,17 +1036,17 @@ Google 发布了第一个列型存储数据库 [Bigtable](http://www.read.seas.h
图数据库为存储具有复杂关系的数据模型(如社交网络)提供了很高的性能。它们相对较新,尚未广泛应用,查找开发工具或者资源相对较难。许多图数据库只能通过 [REST API](#表述性状态转移rest) 访问。
##### 相关资源和延伸阅读:图
- [图数据库](https://en.wikipedia.org/wiki/Graph_database)
- [Neo4j](https://neo4j.com/)
- [FlockDB](https://blog.twitter.com/2010/introducing-flockdb)
- [图数据库](https://en.wikipedia.org/wiki/Graph_database)
- [Neo4j](https://neo4j.com/)
- [FlockDB](https://blog.twitter.com/2010/introducing-flockdb)
#### 来源及延伸阅读NoSQL
- [数据库术语解释](http://stackoverflow.com/questions/3342497/explanation-of-base-terminology)
- [NoSQL 数据库 - 调查及决策指南](https://medium.com/baqend-blog/nosql-databases-a-survey-and-decision-guidance-ea7823a822d#.wskogqenq)
- [可扩展性](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
- [NoSQL 介绍](https://www.youtube.com/watch?v=qI_g07C_Q5I)
- [NoSQL 模式](http://horicky.blogspot.com/2009/11/nosql-patterns.html)
- [数据库术语解释](http://stackoverflow.com/questions/3342497/explanation-of-base-terminology)
- [NoSQL 数据库 - 调查及决策指南](https://medium.com/baqend-blog/nosql-databases-a-survey-and-decision-guidance-ea7823a822d#.wskogqenq)
- [可扩展性](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
- [NoSQL 介绍](https://www.youtube.com/watch?v=qI_g07C_Q5I)
- [NoSQL 模式](http://horicky.blogspot.com/2009/11/nosql-patterns.html)
### SQL 还是 NoSQL
@ -1087,8 +1087,8 @@ Google 发布了第一个列型存储数据库 [Bigtable](http://www.read.seas.h
##### 来源及延伸阅读SQL 或 NoSQL
- [扩展你的用户数到第一个千万](https://www.youtube.com/watch?v=w95murBkYmU)
- [SQL 和 NoSQL 的不同](https://www.sitepoint.com/sql-vs-nosql-differences/)
- [扩展你的用户数到第一个千万](https://www.youtube.com/watch?v=w95murBkYmU)
- [SQL 和 NoSQL 的不同](https://www.sitepoint.com/sql-vs-nosql-differences/)
## 缓存
<p align="center">
@ -1103,7 +1103,7 @@ Google 发布了第一个列型存储数据库 [Bigtable](http://www.read.seas.h
### 客户端缓存
缓存可以位于客户端(操作系统或者浏览器),[服务端](#反向代理web-服务器) 或者不同的缓存层。
缓存可以位于客户端(操作系统或者浏览器),[服务端](#反向代理web-服务器)或者不同的缓存层。
### CDN 缓存
@ -1111,7 +1111,7 @@ Google 发布了第一个列型存储数据库 [Bigtable](http://www.read.seas.h
### Web 服务器缓存
[反向代理](#反向代理web-服务器) 和缓存(比如 [Varnish](https://www.varnish-cache.org/))可以直接提供静态和动态内容。Web 服务器同样也可以缓存请求,返回相应结果而不必连接应用服务器。
[反向代理](#反向代理web-服务器)和缓存(比如 [Varnish](https://www.varnish-cache.org/))可以直接提供静态和动态内容。Web 服务器同样也可以缓存请求,返回相应结果而不必连接应用服务器。
### 数据库缓存
@ -1119,7 +1119,7 @@ Google 发布了第一个列型存储数据库 [Bigtable](http://www.read.seas.h
### 应用缓存
基于内存的缓存比如 Memcached 和 Redis 是应用程序和数据存储之间的一种键值存储。由于数据保存在 RAM 中它比存储在磁盘上的典型数据库要快多了。RAM 比磁盘限制更多,所以例如 [least recently used (LRU) ](https://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used) 的[缓存无效算法](https://en.wikipedia.org/wiki/Cache_algorithms) 可以将「热门数据」放在 RAM 中,而对一些比较「冷门」的数据不做处理。
基于内存的缓存比如 Memcached 和 Redis 是应用程序和数据存储之间的一种键值存储。由于数据保存在 RAM 中它比存储在磁盘上的典型数据库要快多了。RAM 比磁盘限制更多,所以例如 [least recently used (LRU)](https://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used) 的[缓存无效算法](https://en.wikipedia.org/wiki/Cache_algorithms)可以将「热门数据」放在 RAM 中,而对一些比较「冷门」的数据不做处理。
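
下面是 LRU 淘汰策略的一个极简示意(假设性草图;实际使用时通常直接依赖 Redis/Memcached 的内置淘汰策略):

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)          # 标记为最近使用
        return self._data[key]

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # 淘汰最久未使用的「冷门」数据

cache = LRUCache(capacity=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")
cache.set("c", 3)          # 容量已满,"b" 被淘汰
print(cache.get("b"))      # None
```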
Redis 有下列附加功能:
@ -1176,12 +1176,12 @@ Redis 有下列附加功能:
- 返回所需内容
```python
def get_user(self, user_id):
    user = cache.get("user.{0}", user_id)
    if user is None:  # 缓存未命中,回源到数据库
        user = db.query("SELECT * FROM users WHERE user_id = {0}", user_id)
        if user is not None:
            key = "user.{0}".format(user_id)
            cache.set(key, json.dumps(user))  # 将查询结果写入缓存,供后续请求使用
    return user
```
@ -1213,15 +1213,15 @@ def get_user(self, user_id) :
应用代码:
```
set_user(12345, {"foo":"bar"})
```
缓存代码:
```python
def set_user(user_id, values):
    user = db.query("UPDATE Users WHERE id = {0}", user_id, values)
    cache.set(user_id, user)
```
由于每次都要写入数据库,直写模式整体上是一种比较慢的操作,但读取刚写入的数据会很快。相比读取数据,用户通常比较能接受更新数据时速度较慢。缓存中的数据不会过时。
@ -1267,18 +1267,18 @@ def set_user(user_id, values) :
### 缓存的缺点:
- 需要通过[缓存失效](https://en.wikipedia.org/wiki/Cache_algorithms)来保持缓存与真实数据源(比如数据库)之间的一致性
- 需要通过[缓存失效](https://en.wikipedia.org/wiki/Cache_algorithms)来保持缓存与真实数据源(比如数据库)之间的一致性。
- 需要改动应用程序,例如增加 Redis 或者 memcached。
- 缓存失效是个难题,何时更新缓存是随之而来的复杂问题。
### 相关资源和延伸阅读
- [从缓存到内存数据](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast)
- [可扩展系统设计模式](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html)
- [可缩放系统构架介绍](http://lethain.com/introduction-to-architecting-systems-for-scale/)
- [可扩展性,可用性,稳定性和模式](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
- [可扩展性](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
- [AWS ElastiCache 策略](http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Strategies.html)
- [从缓存到内存数据](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast)
- [可扩展系统设计模式](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html)
- [可缩放系统构架介绍](http://lethain.com/introduction-to-architecting-systems-for-scale/)
- [可扩展性,可用性,稳定性和模式](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
- [可扩展性](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
- [AWS ElastiCache 策略](http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Strategies.html)
- [维基百科](https://en.wikipedia.org/wiki/Cache_(computing))
## 异步
@ -1314,7 +1314,7 @@ def set_user(user_id, values) :
### 背压
如果队列开始明显增长,那么队列大小可能会超过内存大小,导致高速缓存未命中,磁盘读取,甚至性能更慢。[背压](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html) 可以通过限制队列大小来帮助我们,从而为队列中的作业保持高吞吐率和良好的响应时间。一旦队列填满,客户端将得到服务器忙或者 HTTP 503 状态码,以便稍后重试。客户端可以稍后重试该请求,比如采用[指数退避](https://en.wikipedia.org/wiki/Exponential_backoff)策略
如果队列开始明显增长,那么队列大小可能会超过内存大小,导致高速缓存未命中,磁盘读取,甚至性能更慢。[背压](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html)可以通过限制队列大小来帮助我们,从而为队列中的作业保持高吞吐率和良好的响应时间。一旦队列填满,客户端将得到服务器忙或者 HTTP 503 状态码,以便稍后重试。客户端可以稍后重试该请求,比如采用[指数退避](https://en.wikipedia.org/wiki/Exponential_backoff)策略。
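
下面是指数退避重试的一个极简示意(假设性草图,重试次数和基准等待时间都是示例参数):

```python
import random
import time

def retry_with_backoff(call, max_retries=5, base=0.1, cap=10.0):
    """按 2^n 递增等待并加入随机抖动,避免所有客户端同时重试。"""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:                        # 实际代码应只捕获「可重试」的错误,例如 HTTP 503
            if attempt == max_retries - 1:
                raise
            sleep = min(cap, base * (2 ** attempt))
            time.sleep(sleep * random.random())  # 「全抖动」:在 [0, sleep) 内随机等待

# retry_with_backoff(lambda: submit_job(payload))  # submit_job 为假设的入队函数
```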
### 异步的缺点:
@ -1322,10 +1322,10 @@ def set_user(user_id, values) :
### 相关资源和延伸阅读
- [这是一个数字游戏](https://www.youtube.com/watch?v=1KRYH75wgy4)
- [超载时应用背压](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html)
- [利特尔法则](https://en.wikipedia.org/wiki/Little%27s_law)
- [消息队列与任务队列有什么区别?](https://www.quora.com/What-is-the-difference-between-a-message-queue-and-a-task-queue-Why-would-a-task-queue-require-a-message-broker-like-RabbitMQ-Redis-Celery-or-IronMQ-to-function)
- [这是一个数字游戏](https://www.youtube.com/watch?v=1KRYH75wgy4)
- [超载时应用背压](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html)
- [利特尔法则](https://en.wikipedia.org/wiki/Little%27s_law)
- [消息队列与任务队列有什么区别?](https://www.quora.com/What-is-the-difference-between-a-message-queue-and-a-task-queue-Why-would-a-task-queue-require-a-message-broker-like-RabbitMQ-Redis-Celery-or-IronMQ-to-function)
## 通讯
@ -1357,10 +1357,10 @@ HTTP 是依赖于较低级协议(如 **TCP** 和 **UDP**)的应用层协议
#### 来源及延伸阅读HTTP
* [README](https://www.quora.com/What-is-the-difference-between-HTTP-protocol-and-TCP-protocol) +
* [HTTP 是什么?](https://www.nginx.com/resources/glossary/http/)
* [HTTP 和 TCP 的区别](https://www.quora.com/What-is-the-difference-between-HTTP-protocol-and-TCP-protocol)
* [PUT 和 PATCH的区别](https://laracasts.com/discuss/channels/general-discussion/whats-the-differences-between-put-and-patch?page=1)
* [README](https://www.quora.com/What-is-the-difference-between-HTTP-protocol-and-TCP-protocol) +
* [HTTP 是什么?](https://www.nginx.com/resources/glossary/http/)
* [HTTP 和 TCP 的区别](https://www.quora.com/What-is-the-difference-between-HTTP-protocol-and-TCP-protocol)
* [PUT 和 PATCH的区别](https://laracasts.com/discuss/channels/general-discussion/whats-the-differences-between-put-and-patch?page=1)
### 传输控制协议TCP
@ -1370,12 +1370,12 @@ HTTP 是依赖于较低级协议(如 **TCP** 和 **UDP**)的应用层协议
<strong><a href="http://www.wildbunny.co.uk/blog/2012/10/09/how-to-make-a-multi-player-game-part-1/">资料来源:如何制作多人游戏</a></strong>
</p>
TCP 是通过 [IP 网络](https://en.wikipedia.org/wiki/Internet_Protocol) 的面向连接的协议。 使用[握手](https://en.wikipedia.org/wiki/Handshaking) 建立和断开连接。 发送的所有数据包保证以原始顺序到达目的地,用以下措施保证数据包不被损坏:
TCP 是通过 [IP 网络](https://en.wikipedia.org/wiki/Internet_Protocol)的面向连接的协议。 使用[握手](https://en.wikipedia.org/wiki/Handshaking)建立和断开连接。 发送的所有数据包保证以原始顺序到达目的地,用以下措施保证数据包不被损坏:
- 每个数据包的序列号和[校验码](https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Checksum_computation)
- 每个数据包的序列号和[校验码](https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Checksum_computation)。
- [确认包](https://en.wikipedia.org/wiki/Acknowledgement_(data_networks))和自动重传
如果发送者没有收到正确的响应它将重新发送数据包。如果多次超时连接就会断开。TCP 实行[流量控制](https://en.wikipedia.org/wiki/Flow_control_(data))和[拥塞控制](https://en.wikipedia.org/wiki/Network_congestion#Congestion_control) 。这些确保措施会导致延迟,而且通常导致传输效率比 UDP 低。
如果发送者没有收到正确的响应它将重新发送数据包。如果多次超时连接就会断开。TCP 实行[流量控制](https://en.wikipedia.org/wiki/Flow_control_(data))和[拥塞控制](https://en.wikipedia.org/wiki/Network_congestion#Congestion_control)。这些确保措施会导致延迟,而且通常导致传输效率比 UDP 低。
为了确保高吞吐量Web 服务器可以保持大量的 TCP 连接,从而导致高内存使用。在 Web 服务器线程和(比如)一台 [memcached](#memcached) 服务器之间保持大量开放连接,开销可能很大、消耗资源过多。除了在适用的情况下改用 UDP 之外,[连接池](https://en.wikipedia.org/wiki/Connection_pool) 也可以提供帮助。
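
下面用标准库粗略示意连接池的思路(假设性草图,主机与端口为示例值;真实场景通常使用客户端库自带的连接池):

```python
import queue
import socket

class ConnectionPool:
    """预先建立固定数量的 TCP 连接并反复复用,避免频繁握手/挥手。"""
    def __init__(self, host, port, size=10):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(socket.create_connection((host, port)))

    def acquire(self, timeout=1.0):
        return self._pool.get(timeout=timeout)   # 池被取空时阻塞,形成简单的背压

    def release(self, conn):
        self._pool.put(conn)

# pool = ConnectionPool("10.0.0.5", 11211)  # 例如指向一台 memcached 服务器
# conn = pool.acquire(); ...; pool.release(conn)
```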
@ -1408,12 +1408,12 @@ UDP 可靠性更低但适合用在网络电话、视频聊天,流媒体和实
#### 来源及延伸阅读TCP 与 UDP
* [游戏编程的网络](http://gafferongames.com/networking-for-game-programmers/udp-vs-tcp/)
* [TCP 与 UDP 的关键区别](http://www.cyberciti.biz/faq/key-differences-between-tcp-and-udp-protocols/)
* [TCP 与 UDP 的不同](http://stackoverflow.com/questions/5970383/difference-between-tcp-and-udp)
* [传输控制协议](https://en.wikipedia.org/wiki/Transmission_Control_Protocol)
* [用户数据报协议](https://en.wikipedia.org/wiki/User_Datagram_Protocol)
* [Memcache 在 Facebook 的扩展](http://www.cs.bu.edu/~jappavoo/jappavoo.github.com/451/papers/memcache-fb.pdf)
* [游戏编程的网络](http://gafferongames.com/networking-for-game-programmers/udp-vs-tcp/)
* [TCP 与 UDP 的关键区别](http://www.cyberciti.biz/faq/key-differences-between-tcp-and-udp-protocols/)
* [TCP 与 UDP 的不同](http://stackoverflow.com/questions/5970383/difference-between-tcp-and-udp)
* [传输控制协议](https://en.wikipedia.org/wiki/Transmission_Control_Protocol)
* [用户数据报协议](https://en.wikipedia.org/wiki/User_Datagram_Protocol)
* [Memcache 在 Facebook 的扩展](http://www.cs.bu.edu/~jappavoo/jappavoo.github.com/451/papers/memcache-fb.pdf)
### 远程过程调用协议RPC
@ -1423,7 +1423,7 @@ UDP 可靠性更低但适合用在网络电话、视频聊天,流媒体和实
<strong><a href="http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview">Source: Crack the system design interview</a></strong>
</p>
在 RPC 中,客户端会去调用另一个地址空间(通常是一个远程服务器)里的方法。调用代码看起来就像是调用的是一个本地方法,客户端和服务器交互的具体过程被抽象。远程调用相对于本地调用一般较慢而且可靠性更差,因此区分两者是有帮助的。热门的 RPC 框架包括 [Protobuf](https://developers.google.com/protocol-buffers/) 、[Thrift](https://thrift.apache.org/) 和 [Avro](https://avro.apache.org/docs/current/)
在 RPC 中,客户端会去调用另一个地址空间(通常是一个远程服务器)里的方法。调用代码看起来就像是调用的是一个本地方法,客户端和服务器交互的具体过程被抽象。远程调用相对于本地调用一般较慢而且可靠性更差,因此区分两者是有帮助的。热门的 RPC 框架包括 [Protobuf](https://developers.google.com/protocol-buffers/)、[Thrift](https://thrift.apache.org/) 和 [Avro](https://avro.apache.org/docs/current/)。
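
下面用 Python 标准库的 xmlrpc 粗略演示「远程调用看起来像本地调用」这件事(仅为示意,端口与方法名均为示例,并非上述框架的用法):

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def add(x, y):
    return x + y

# 服务端:注册可被远程调用的方法
server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# 客户端:调用方式几乎与本地函数一致,网络细节被代理对象隐藏
proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:8000")
print(proxy.add(2, 3))  # 5
```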
RPC 是一个“请求-响应”协议:
@ -1462,7 +1462,7 @@ RPC 专注于暴露方法。RPC 通常用于处理内部通讯的性能问题,
* RPC 客户端与服务端的实现捆绑得很紧密。
* 一个新的 API 必须在每一个操作或者用例中定义。
* RPC 很难调试。
* 你可能没办法很方便的去修改现有的技术。举个例子,如果你希望在 [Squid](http://www.squid-cache.org/) 这样的缓存服务器上确保 [RPC 被正确缓存](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/) 的话可能需要一些额外的努力了。
* 你可能没办法很方便的去修改现有的技术。举个例子,如果你希望在 [Squid](http://www.squid-cache.org/) 这样的缓存服务器上确保 [RPC 被正确缓存](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/)的话可能需要一些额外的努力了。
### 表述性状态转移REST
@ -1473,7 +1473,7 @@ RESTful 接口有四条规则:
* **标识资源HTTP 里的 URI** ── 无论什么操作都使用同一个 URI。
* **表示的改变HTTP 的动作)** ── 使用动作, headers 和 body。
* **可自我描述的错误信息HTTP 中的 status code** ── 使用状态码,不要重新造轮子。
* **[HATEOAS](http://restcookbook.com/Basics/hateoas/) HTTP 中的HTML 接口)** ── 你的 web 服务器应该能够通过浏览器访问。
* **[HATEOAS](http://restcookbook.com/Basics/hateoas/)HTTP 中的HTML 接口)** ── 你的 web 服务器应该能够通过浏览器访问。
REST 请求的例子:
@ -1484,7 +1484,7 @@ PUT /someresources/anId
{"anotherdata": "another value"}
```
REST 关注于暴露数据。它减少了客户端/服务端的耦合程度,经常用于公共 HTTP API 接口设计。REST 使用更通用与一致的方法,通过 URI 来暴露资源,[通过 header 来表述](https://github.com/for-GET/know-your-http-well/blob/master/headers.md) 并通过 GET、POST、PUT、DELETE 和 PATCH 这些动作来进行操作。因为无状态的特性REST 易于横向扩展和隔离。
REST 关注于暴露数据。它减少了客户端/服务端的耦合程度,经常用于公共 HTTP API 接口设计。REST 使用更通用与一致的方法,通过 URI 来暴露资源,[通过 header 来表述](https://github.com/for-GET/know-your-http-well/blob/master/headers.md)并通过 GET、POST、PUT、DELETE 和 PATCH 这些动作来进行操作。因为无状态的特性REST 易于横向扩展和隔离。
#### 缺点REST
@ -1511,34 +1511,34 @@ REST 关注于暴露数据。它减少了客户端/服务端的耦合程度,
#### 来源及延伸阅读REST 与 RPC
* [你真的知道你为什么更喜欢 REST 而不是 RPC 吗](https://apihandyman.io/do-you-really-know-why-you-prefer-rest-over-rpc/)
* [什么时候 RPC 比 REST 更合适?](http://programmers.stackexchange.com/a/181186)
* [REST vs JSON-RPC](http://stackoverflow.com/questions/15056878/rest-vs-json-rpc)
* [揭开 RPC 和 REST 的神秘面纱](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/)
* [使用 REST 的缺点是什么](https://www.quora.com/What-are-the-drawbacks-of-using-RESTful-APIs)
* [破解系统设计面试](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
* [Thrift](https://code.facebook.com/posts/1468950976659943/)
* [为什么在内部使用 REST 而不是 RPC](http://arstechnica.com/civis/viewtopic.php?t=1190508)
* [你真的知道你为什么更喜欢 REST 而不是 RPC 吗](https://apihandyman.io/do-you-really-know-why-you-prefer-rest-over-rpc/)
* [什么时候 RPC 比 REST 更合适?](http://programmers.stackexchange.com/a/181186)
* [REST vs JSON-RPC](http://stackoverflow.com/questions/15056878/rest-vs-json-rpc)
* [揭开 RPC 和 REST 的神秘面纱](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/)
* [使用 REST 的缺点是什么](https://www.quora.com/What-are-the-drawbacks-of-using-RESTful-APIs)
* [破解系统设计面试](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
* [Thrift](https://code.facebook.com/posts/1468950976659943/)
* [为什么在内部使用 REST 而不是 RPC](http://arstechnica.com/civis/viewtopic.php?t=1190508)
## 安全
这一部分需要更多内容。[一起来吧](#贡献)
这一部分需要更多内容。[一起来吧](#贡献)
安全是一个宽泛的话题。除非你有相当的经验、安全方面背景或者正在申请的职位要求安全知识,你不需要了解安全基础知识以外的内容:
* 在传输过程中和静态存储时对数据加密
* 对所有的用户输入和暴露给用户的输入参数进行过滤,以防止 [XSS](https://en.wikipedia.org/wiki/Cross-site_scripting) 和 [SQL 注入](https://en.wikipedia.org/wiki/SQL_injection)
* 对所有的用户输入和暴露给用户的输入参数进行过滤,以防止 [XSS](https://en.wikipedia.org/wiki/Cross-site_scripting) 和 [SQL 注入](https://en.wikipedia.org/wiki/SQL_injection)。
* 使用参数化的查询来防止 SQL 注入。
* 使用[最小权限原则](https://en.wikipedia.org/wiki/Principle_of_least_privilege)
* 使用[最小权限原则](https://en.wikipedia.org/wiki/Principle_of_least_privilege)。
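
下面用 SQLite 粗略演示参数化查询与字符串拼接的差别(仅为示意,表结构与输入都是假设的):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

user_input = "alice' OR '1'='1"   # 恶意输入

# 不要这样做:字符串拼接会让输入被当作 SQL 执行
# conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

# 应该这样做:占位符让驱动把输入当作纯数据
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [],注入串只被当成普通字符串
```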
### 来源及延伸阅读
* [为开发者准备的安全引导](https://github.com/FallibleInc/security-guide-for-developers)
* [OWASP top ten](https://www.owasp.org/index.php/OWASP_Top_Ten_Cheat_Sheet)
* [为开发者准备的安全引导](https://github.com/FallibleInc/security-guide-for-developers)
* [OWASP top ten](https://www.owasp.org/index.php/OWASP_Top_Ten_Cheat_Sheet)
## 附录
一些时候你会被要求做一些粗略的估算。比如,你可能需要估计从磁盘中生成 100 张图片的缩略图需要的时间或者一个数据结构需要多少的内存。**2 的次方表**和**每个开发者都需要知道的一些时间数据**译注OSChina 上有这篇文章的[译文](https://www.oschina.net/news/30009/every-programmer-should-know) )都是一些很方便的参考资料。
一些时候你会被要求做一些粗略的估算。比如,你可能需要估计从磁盘中生成 100 张图片的缩略图需要的时间或者一个数据结构需要多少的内存。**2 的次方表**和**每个开发者都需要知道的一些时间数据**译注OSChina 上有这篇文章的[译文](https://www.oschina.net/news/30009/every-programmer-should-know))都是一些很方便的参考资料。
### 2 的次方表
@ -1557,7 +1557,7 @@ Power Exact Value Approx Value Bytes
#### 来源及延伸阅读
* [2 的次方](https://en.wikipedia.org/wiki/Power_of_two)
* [2 的次方](https://en.wikipedia.org/wiki/Power_of_two)
### 每个程序员都应该知道的延迟数
@ -1597,14 +1597,14 @@ Notes
#### 延迟数可视化
![](https://camo.githubusercontent.com/77f72259e1eb58596b564d1ad823af1853bc60a3/687474703a2f2f692e696d6775722e636f6d2f6b307431652e706e67)
![](https://camo.githubusercontent.com/77f72259e1eb58596b564d1ad823af1853bc60a3/687474703a2f2f692e696d6775722e636f6d2f6b307431652e706e67)
#### 来源及延伸阅读
* [每个程序员都应该知道的延迟数 — 1](https://gist.github.com/jboner/2841832)
* [每个程序员都应该知道的延迟数 — 2](https://gist.github.com/hellerbarde/2843375)
* [关于建设大型分布式系统的的设计方案、课程和建议](http://www.cs.cornell.edu/projects/ladis2009/talks/dean-keynote-ladis2009.pdf)
* [关于建设大型可拓展分布式系统的软件工程咨询](https://static.googleusercontent.com/media/research.google.com/en//people/jeff/stanford-295-talk.pdf)
* [每个程序员都应该知道的延迟数 — 1](https://gist.github.com/jboner/2841832)
* [每个程序员都应该知道的延迟数 — 2](https://gist.github.com/hellerbarde/2843375)
* [关于建设大型分布式系统的的设计方案、课程和建议](http://www.cs.cornell.edu/projects/ladis2009/talks/dean-keynote-ladis2009.pdf)
* [关于建设大型可拓展分布式系统的软件工程咨询](https://static.googleusercontent.com/media/research.google.com/en//people/jeff/stanford-295-talk.pdf)
### 其它的系统设计面试题
@ -1613,27 +1613,27 @@ Notes
| 问题 | 引用 |
| ----------------------- | ---------------------------------------- |
| 设计类似于 Dropbox 的文件同步服务 | [youtube.com](https://www.youtube.com/watch?v=PE4gwstWhmc) |
| 设计类似于 Google 的搜索引擎 | [queue.acm.org](http://queue.acm.org/detail.cfm?id=988407) <br/>[stackexchange.com](http://programmers.stackexchange.com/questions/38324/interview-question-how-would-you-implement-google-search) <br/>[ardendertat.com](http://www.ardendertat.com/2012/01/11/implementing-search-engines/) <br/>[stanford.edu](http://infolab.stanford.edu/~backrub/google.html) |
| 设计类似于 Google 的搜索引擎 | [queue.acm.org](http://queue.acm.org/detail.cfm?id=988407)<br/>[stackexchange.com](http://programmers.stackexchange.com/questions/38324/interview-question-how-would-you-implement-google-search)<br/>[ardendertat.com](http://www.ardendertat.com/2012/01/11/implementing-search-engines/)<br/>[stanford.edu](http://infolab.stanford.edu/~backrub/google.html) |
| 设计类似于 Google 的可扩展网络爬虫 | [quora.com](https://www.quora.com/How-can-I-build-a-web-crawler-from-scratch) |
| 设计 Google 文档 | [code.google.com](https://code.google.com/p/google-mobwrite/) <br/>[neil.fraser.name](https://neil.fraser.name/writing/sync/) |
| 设计 Google 文档 | [code.google.com](https://code.google.com/p/google-mobwrite/)<br/>[neil.fraser.name](https://neil.fraser.name/writing/sync/) |
| 设计类似 Redis 的键值存储 | [slideshare.net](http://www.slideshare.net/dvirsky/introduction-to-redis) |
| 设计类似 Memcached 的缓存系统 | [slideshare.net](http://www.slideshare.net/oemebamo/introduction-to-memcached) |
| 设计类似亚马逊的推荐系统 | [hulu.com](http://tech.hulu.com/blog/2011/09/19/recommendation-system.html) <br/>[ijcai13.org](http://ijcai13.org/files/tutorial_slides/td3.pdf) |
| 设计类似亚马逊的推荐系统 | [hulu.com](http://tech.hulu.com/blog/2011/09/19/recommendation-system.html)<br/>[ijcai13.org](http://ijcai13.org/files/tutorial_slides/td3.pdf) |
| 设计类似 Bitly 的短链接系统 | [n00tc0d3r.blogspot.com](http://n00tc0d3r.blogspot.com/) |
| 设计类似 WhatsApp 的聊天应用 | [highscalability.com](http://highscalability.com/blog/2014/2/26/the-whatsapp-architecture-facebook-bought-for-19-billion.html) |
| 设计类似 Instagram 的图片分享系统 | [highscalability.com](http://highscalability.com/flickr-architecture) <br/>[highscalability.com](http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html) |
| 设计 Facebook 的新闻推荐方法 | [quora.com](http://www.quora.com/What-are-best-practices-for-building-something-like-a-News-Feed) <br/>[quora.com](http://www.quora.com/Activity-Streams/What-are-the-scaling-issues-to-keep-in-mind-while-developing-a-social-network-feed) <br/>[slideshare.net](http://www.slideshare.net/danmckinley/etsy-activity-feeds-architecture) |
| 设计 Facebook 的时间线系统 | [facebook.com](https://www.facebook.com/note.php?note_id=10150468255628920) <br/>[highscalability.com](http://highscalability.com/blog/2012/1/23/facebook-timeline-brought-to-you-by-the-power-of-denormaliza.html) |
| 设计 Facebook 的聊天系统 | [erlang-factory.com](http://www.erlang-factory.com/upload/presentations/31/EugeneLetuchy-ErlangatFacebook.pdf) <br/>[facebook.com](https://www.facebook.com/note.php?note_id=14218138919&id=9445547199&index=0) |
| 设计类似 Facebook 的图表搜索系统 | [facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-building-out-the-infrastructure-for-graph-search/10151347573598920) <br/>[facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-indexing-and-ranking-in-graph-search/10151361720763920) <br/>[facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-the-natural-language-interface-of-graph-search/10151432733048920) |
| 设计类似 Instagram 的图片分享系统 | [highscalability.com](http://highscalability.com/flickr-architecture)<br/>[highscalability.com](http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html) |
| 设计 Facebook 的新闻推荐方法 | [quora.com](http://www.quora.com/What-are-best-practices-for-building-something-like-a-News-Feed)<br/>[quora.com](http://www.quora.com/Activity-Streams/What-are-the-scaling-issues-to-keep-in-mind-while-developing-a-social-network-feed)<br/>[slideshare.net](http://www.slideshare.net/danmckinley/etsy-activity-feeds-architecture) |
| 设计 Facebook 的时间线系统 | [facebook.com](https://www.facebook.com/note.php?note_id=10150468255628920)<br/>[highscalability.com](http://highscalability.com/blog/2012/1/23/facebook-timeline-brought-to-you-by-the-power-of-denormaliza.html) |
| 设计 Facebook 的聊天系统 | [erlang-factory.com](http://www.erlang-factory.com/upload/presentations/31/EugeneLetuchy-ErlangatFacebook.pdf)<br/>[facebook.com](https://www.facebook.com/note.php?note_id=14218138919&id=9445547199&index=0) |
| 设计类似 Facebook 的图表搜索系统 | [facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-building-out-the-infrastructure-for-graph-search/10151347573598920)<br/>[facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-indexing-and-ranking-in-graph-search/10151361720763920)<br/>[facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-the-natural-language-interface-of-graph-search/10151432733048920) |
| 设计类似 CloudFlare 的内容传递网络 | [cmu.edu](http://repository.cmu.edu/cgi/viewcontent.cgi?article=2112&context=compsci) |
| 设计类似 Twitter 的热门话题系统 | [michael-noll.com](http://www.michael-noll.com/blog/2013/01/18/implementing-real-time-trending-topics-in-storm/) <br/>[snikolov .wordpress.com](http://snikolov.wordpress.com/2012/11/14/early-detection-of-twitter-trends/) |
| 设计一个随机 ID 生成系统 | [blog.twitter.com](https://blog.twitter.com/2010/announcing-snowflake) <br/>[github.com](https://github.com/twitter/snowflake/) |
| 返回一定时间段内次数前 k 高的请求 | [ucsb.edu](https://icmi.cs.ucsb.edu/research/tech_reports/reports/2005-23.pdf) <br/>[wpi.edu](http://davis.wpi.edu/xmdv/docs/EDBT11-diyang.pdf) |
| 设计类似 Twitter 的热门话题系统 | [michael-noll.com](http://www.michael-noll.com/blog/2013/01/18/implementing-real-time-trending-topics-in-storm/)<br/>[snikolov .wordpress.com](http://snikolov.wordpress.com/2012/11/14/early-detection-of-twitter-trends/) |
| 设计一个随机 ID 生成系统 | [blog.twitter.com](https://blog.twitter.com/2010/announcing-snowflake)<br/>[github.com](https://github.com/twitter/snowflake/) |
| 返回一定时间段内次数前 k 高的请求 | [ucsb.edu](https://icmi.cs.ucsb.edu/research/tech_reports/reports/2005-23.pdf)<br/>[wpi.edu](http://davis.wpi.edu/xmdv/docs/EDBT11-diyang.pdf) |
| 设计一个数据源于多个数据中心的服务系统 | [highscalability.com](http://highscalability.com/blog/2009/8/24/how-google-serves-data-from-multiple-datacenters.html) |
| 设计一个多人网络卡牌游戏 | [indieflashblog.com](https://web.archive.org/web/20180929181117/http://www.indieflashblog.com/how-to-create-an-asynchronous-multiplayer-game.html) <br/>[buildnewgames.com](http://buildnewgames.com/real-time-multiplayer/) |
| 设计一个垃圾回收系统 | [stuffwithstuff.com](http://journal.stuffwithstuff.com/2013/12/08/babys-first-garbage-collector/) <br/>[washington.edu](http://courses.cs.washington.edu/courses/csep521/07wi/prj/rick.pdf) |
| 添加更多的系统设计问题 | [贡献](#贡献) |
| 设计一个多人网络卡牌游戏 | [indieflashblog.com](https://web.archive.org/web/20180929181117/http://www.indieflashblog.com/how-to-create-an-asynchronous-multiplayer-game.html)<br/>[buildnewgames.com](http://buildnewgames.com/real-time-multiplayer/) |
| 设计一个垃圾回收系统 | [stuffwithstuff.com](http://journal.stuffwithstuff.com/2013/12/08/babys-first-garbage-collector/)<br/>[washington.edu](http://courses.cs.washington.edu/courses/csep521/07wi/prj/rick.pdf) |
| 添加更多的系统设计问题 | [贡献](#贡献) |
### 真实架构
@ -1666,18 +1666,18 @@ Notes
| Data store | **Memcached** - 分布式内存缓存系统 | [slideshare.net](http://www.slideshare.net/oemebamo/introduction-to-memcached) |
| Data store | **Redis** - 能够持久化及具有值类型的分布式内存缓存系统 | [slideshare.net](http://www.slideshare.net/dvirsky/introduction-to-redis) |
| | | |
| File system | **Google File System (GFS) ** - 分布式文件系统 | [research.google.com](http://static.googleusercontent.com/media/research.google.com/zh-CN/us/archive/gfs-sosp2003.pdf) |
| File system | **Hadoop File System (HDFS) ** - GFS 的开源实现 | [apache.org](https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html) |
| File system | **Google File System (GFS)** - 分布式文件系统 | [research.google.com](http://static.googleusercontent.com/media/research.google.com/zh-CN/us/archive/gfs-sosp2003.pdf) |
| File system | **Hadoop File System (HDFS)** - GFS 的开源实现 | [apache.org](https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html) |
| | | |
| Misc | **Chubby** - Google 的分布式系统的低耦合锁服务 | [research.google.com](http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/chubby-osdi06.pdf) |
| Misc | **Dapper** - 分布式系统跟踪基础设施 | [research.google.com](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36356.pdf) |
| Misc | **Kafka** - LinkedIn 的发布订阅消息系统 | [slideshare.net](http://www.slideshare.net/mumrah/kafka-talk-tri-hug) |
| Misc | **Zookeeper** - 集中的基础架构和协调服务 | [slideshare.net](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) |
| | 添加更多 | [贡献](#贡献) |
| | 添加更多 | [贡献](#贡献) |
### 公司的系统架构
| Company | Reference(s) |
| Company | Reference(s) |
| -------------- | ---------------------------------------- |
| Amazon | [Amazon 的架构](http://highscalability.com/amazon-architecture) |
| Cinchcast | [每天产生 1500 小时的音频](http://highscalability.com/blog/2012/7/16/cinchcast-architecture-producing-1500-hours-of-audio-every-d.html) |
@ -1685,22 +1685,22 @@ Notes
| DropBox | [我们如何缩放 Dropbox](https://www.youtube.com/watch?v=PE4gwstWhmc) |
| ESPN | [每秒操作 100000 次](http://highscalability.com/blog/2013/11/4/espns-architecture-at-scale-operating-at-100000-duh-nuh-nuhs.html) |
| Google | [Google 的架构](http://highscalability.com/google-architecture) |
| Instagram | [1400 万用户,达到兆级别的照片存储](http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html) <br/>[是什么在驱动 Instagram](http://instagram-engineering.tumblr.com/post/13649370142/what-powers-instagram-hundreds-of-instances) |
| Instagram | [1400 万用户,达到兆级别的照片存储](http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html)<br/>[是什么在驱动 Instagram](http://instagram-engineering.tumblr.com/post/13649370142/what-powers-instagram-hundreds-of-instances) |
| Justin.tv | [Justin.Tv 的直播广播架构](http://highscalability.com/blog/2010/3/16/justintvs-live-video-broadcasting-architecture.html) |
| Facebook | [Facebook 的可扩展 memcached](https://cs.uwaterloo.ca/~brecht/courses/854-Emerging-2014/readings/key-value/fb-memcached-nsdi-2013.pdf) <br/>[TAO: Facebook 社交图的分布式数据存储](https://cs.uwaterloo.ca/~brecht/courses/854-Emerging-2014/readings/data-store/tao-facebook-distributed-datastore-atc-2013.pdf) <br/>[Facebook 的图片存储](https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Beaver.pdf) |
| Facebook | [Facebook 的可扩展 memcached](https://cs.uwaterloo.ca/~brecht/courses/854-Emerging-2014/readings/key-value/fb-memcached-nsdi-2013.pdf)<br/>[TAO: Facebook 社交图的分布式数据存储](https://cs.uwaterloo.ca/~brecht/courses/854-Emerging-2014/readings/data-store/tao-facebook-distributed-datastore-atc-2013.pdf)<br/>[Facebook 的图片存储](https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Beaver.pdf) |
| Flickr | [Flickr 的架构](http://highscalability.com/flickr-architecture) |
| Mailbox | [在 6 周内从 0 到 100 万用户](http://highscalability.com/blog/2013/6/18/scaling-mailbox-from-0-to-one-million-users-in-6-weeks-and-1.html) |
| Pinterest | [从零到每月数十亿的浏览量](http://highscalability.com/blog/2013/4/15/scaling-pinterest-from-0-to-10s-of-billions-of-page-views-a.html) <br/>[1800 万访问用户10 倍增长12 名员工](http://highscalability.com/blog/2012/5/21/pinterest-architecture-update-18-million-visitors-10x-growth.html) |
| Pinterest | [从零到每月数十亿的浏览量](http://highscalability.com/blog/2013/4/15/scaling-pinterest-from-0-to-10s-of-billions-of-page-views-a.html)<br/>[1800 万访问用户10 倍增长12 名员工](http://highscalability.com/blog/2012/5/21/pinterest-architecture-update-18-million-visitors-10x-growth.html) |
| Playfish | [月用户量 5000 万并在不断增长](http://highscalability.com/blog/2010/9/21/playfishs-social-gaming-architecture-50-million-monthly-user.html) |
| PlentyOfFish | [PlentyOfFish 的架构](http://highscalability.com/plentyoffish-architecture) |
| Salesforce | [他们每天如何处理 13 亿笔交易](http://highscalability.com/blog/2013/9/23/salesforce-architecture-how-they-handle-13-billion-transacti.html) |
| Stack Overflow | [Stack Overflow 的架构](http://highscalability.com/blog/2009/8/5/stack-overflow-architecture.html) |
| TripAdvisor | [40M 访问者200M 页面浏览量30TB 数据](http://highscalability.com/blog/2011/6/27/tripadvisor-architecture-40m-visitors-200m-dynamic-page-view.html) |
| Tumblr | [每月 150 亿的浏览量](http://highscalability.com/blog/2012/2/13/tumblr-architecture-15-billion-page-views-a-month-and-harder.html) |
| Twitter | [Making Twitter 10000 percent faster](http://highscalability.com/scaling-twitter-making-twitter-10000-percent-faster) <br/>[每天使用 MySQL 存储2.5亿条 tweet](http://highscalability.com/blog/2011/12/19/how-twitter-stores-250-million-tweets-a-day-using-mysql.html) <br/>[150M 活跃用户300K QPS22 MB/S 的防火墙](http://highscalability.com/blog/2013/7/8/the-architecture-twitter-uses-to-deal-with-150m-active-users.html) <br/>[可扩展时间表](https://www.infoq.com/presentations/Twitter-Timeline-Scalability) <br/>[Twitter 的大小数据](https://www.youtube.com/watch?v=5cKTP36HVgI) <br/>[Twitter 的行为:规模超过 1 亿用户](https://www.youtube.com/watch?v=z8LU0Cj6BOU) |
| Twitter | [Making Twitter 10000 percent faster](http://highscalability.com/scaling-twitter-making-twitter-10000-percent-faster)<br/>[每天使用 MySQL 存储2.5亿条 tweet](http://highscalability.com/blog/2011/12/19/how-twitter-stores-250-million-tweets-a-day-using-mysql.html)<br/>[150M 活跃用户300K QPS22 MB/S 的防火墙](http://highscalability.com/blog/2013/7/8/the-architecture-twitter-uses-to-deal-with-150m-active-users.html)<br/>[可扩展时间表](https://www.infoq.com/presentations/Twitter-Timeline-Scalability)<br/>[Twitter 的大小数据](https://www.youtube.com/watch?v=5cKTP36HVgI)<br/>[Twitter 的行为:规模超过 1 亿用户](https://www.youtube.com/watch?v=z8LU0Cj6BOU) |
| Uber | [Uber 如何扩展自己的实时化市场](http://highscalability.com/blog/2015/9/14/how-uber-scales-their-real-time-market-platform.html) |
| WhatsApp | [Facebook 用 190 亿美元购买 WhatsApp 的架构](http://highscalability.com/blog/2014/2/26/the-whatsapp-architecture-facebook-bought-for-19-billion.html) |
| YouTube | [YouTube 的可扩展性](https://www.youtube.com/watch?v=w5WVu624fY8) <br/>[YouTube 的架构](http://highscalability.com/youtube-architecture) |
| YouTube | [YouTube 的可扩展性](https://www.youtube.com/watch?v=w5WVu624fY8)<br/>[YouTube 的架构](http://highscalability.com/youtube-architecture) |
### 公司工程博客
@ -1708,60 +1708,60 @@ Notes
>
> 你面对的问题可能就来自于同样领域
* [Airbnb Engineering](http://nerds.airbnb.com/)
* [Atlassian Developers](https://developer.atlassian.com/blog/)
* [Autodesk Engineering](http://cloudengineering.autodesk.com/blog/)
* [AWS Blog](https://aws.amazon.com/blogs/aws/)
* [Bitly Engineering Blog](http://word.bitly.com/)
* [Box Blogs](https://www.box.com/blog/engineering/)
* [Cloudera Developer Blog](http://blog.cloudera.com/blog/)
* [Dropbox Tech Blog](https://tech.dropbox.com/)
* [Engineering at Quora](http://engineering.quora.com/)
* [Ebay Tech Blog](http://www.ebaytechblog.com/)
* [Evernote Tech Blog](https://blog.evernote.com/tech/)
* [Etsy Code as Craft](http://codeascraft.com/)
* [Facebook Engineering](https://www.facebook.com/Engineering)
* [Flickr Code](http://code.flickr.net/)
* [Foursquare Engineering Blog](http://engineering.foursquare.com/)
* [GitHub Engineering Blog](http://githubengineering.com/)
* [Google Research Blog](http://googleresearch.blogspot.com/)
* [Groupon Engineering Blog](https://engineering.groupon.com/)
* [Heroku Engineering Blog](https://engineering.heroku.com/)
* [Hubspot Engineering Blog](http://product.hubspot.com/blog/topic/engineering)
* [High Scalability](http://highscalability.com/)
* [Instagram Engineering](http://instagram-engineering.tumblr.com/)
* [Intel Software Blog](https://software.intel.com/en-us/blogs/)
* [Jane Street Tech Blog](https://blogs.janestreet.com/category/ocaml/)
* [LinkedIn Engineering](http://engineering.linkedin.com/blog)
* [Microsoft Engineering](https://engineering.microsoft.com/)
* [Microsoft Python Engineering](https://blogs.msdn.microsoft.com/pythonengineering/)
* [Netflix Tech Blog](http://techblog.netflix.com/)
* [Paypal Developer Blog](https://devblog.paypal.com/category/engineering/)
* [Pinterest Engineering Blog](http://engineering.pinterest.com/)
* [Quora Engineering](https://engineering.quora.com/)
* [Reddit Blog](http://www.redditblog.com/)
* [Salesforce Engineering Blog](https://developer.salesforce.com/blogs/engineering/)
* [Slack Engineering Blog](https://slack.engineering/)
* [Spotify Labs](https://labs.spotify.com/)
* [Twilio Engineering Blog](http://www.twilio.com/engineering)
* [Twitter Engineering](https://engineering.twitter.com/)
* [Uber Engineering Blog](http://eng.uber.com/)
* [Yahoo Engineering Blog](http://yahooeng.tumblr.com/)
* [Yelp Engineering Blog](http://engineeringblog.yelp.com/)
* [Zynga Engineering Blog](https://www.zynga.com/blogs/engineering)
* [Airbnb Engineering](http://nerds.airbnb.com/)
* [Atlassian Developers](https://developer.atlassian.com/blog/)
* [Autodesk Engineering](http://cloudengineering.autodesk.com/blog/)
* [AWS Blog](https://aws.amazon.com/blogs/aws/)
* [Bitly Engineering Blog](http://word.bitly.com/)
* [Box Blogs](https://www.box.com/blog/engineering/)
* [Cloudera Developer Blog](http://blog.cloudera.com/blog/)
* [Dropbox Tech Blog](https://tech.dropbox.com/)
* [Engineering at Quora](http://engineering.quora.com/)
* [Ebay Tech Blog](http://www.ebaytechblog.com/)
* [Evernote Tech Blog](https://blog.evernote.com/tech/)
* [Etsy Code as Craft](http://codeascraft.com/)
* [Facebook Engineering](https://www.facebook.com/Engineering)
* [Flickr Code](http://code.flickr.net/)
* [Foursquare Engineering Blog](http://engineering.foursquare.com/)
* [GitHub Engineering Blog](http://githubengineering.com/)
* [Google Research Blog](http://googleresearch.blogspot.com/)
* [Groupon Engineering Blog](https://engineering.groupon.com/)
* [Heroku Engineering Blog](https://engineering.heroku.com/)
* [Hubspot Engineering Blog](http://product.hubspot.com/blog/topic/engineering)
* [High Scalability](http://highscalability.com/)
* [Instagram Engineering](http://instagram-engineering.tumblr.com/)
* [Intel Software Blog](https://software.intel.com/en-us/blogs/)
* [Jane Street Tech Blog](https://blogs.janestreet.com/category/ocaml/)
* [LinkedIn Engineering](http://engineering.linkedin.com/blog)
* [Microsoft Engineering](https://engineering.microsoft.com/)
* [Microsoft Python Engineering](https://blogs.msdn.microsoft.com/pythonengineering/)
* [Netflix Tech Blog](http://techblog.netflix.com/)
* [Paypal Developer Blog](https://devblog.paypal.com/category/engineering/)
* [Pinterest Engineering Blog](http://engineering.pinterest.com/)
* [Quora Engineering](https://engineering.quora.com/)
* [Reddit Blog](http://www.redditblog.com/)
* [Salesforce Engineering Blog](https://developer.salesforce.com/blogs/engineering/)
* [Slack Engineering Blog](https://slack.engineering/)
* [Spotify Labs](https://labs.spotify.com/)
* [Twilio Engineering Blog](http://www.twilio.com/engineering)
* [Twitter Engineering](https://engineering.twitter.com/)
* [Uber Engineering Blog](http://eng.uber.com/)
* [Yahoo Engineering Blog](http://yahooeng.tumblr.com/)
* [Yelp Engineering Blog](http://engineeringblog.yelp.com/)
* [Zynga Engineering Blog](https://www.zynga.com/blogs/engineering)
#### 来源及延伸阅读
* [kilimchoi/engineering-blogs](https://github.com/kilimchoi/engineering-blogs)
* [kilimchoi/engineering-blogs](https://github.com/kilimchoi/engineering-blogs)
## 正在完善中
有兴趣加入添加一些部分或者帮助完善某些部分吗?[加入进来吧](#贡献)
有兴趣加入添加一些部分或者帮助完善某些部分吗?[加入进来吧](#贡献)
* 使用 MapReduce 进行分布式计算
* 一致性哈希
* 直接存储器访问DMA控制器
* [贡献](#贡献)
* [贡献](#贡献)
## 致谢
@ -1769,24 +1769,24 @@ Notes
特别鸣谢:
* [Hired in tech](http://www.hiredintech.com/system-design/the-system-design-process/)
* [Cracking the coding interview](https://www.amazon.com/dp/0984782850/)
* [High scalability](http://highscalability.com/)
* [checkcheckzz/system-design-interview](https://github.com/checkcheckzz/system-design-interview)
* [shashank88/system_design](https://github.com/shashank88/system_design)
* [mmcgrana/services-engineering](https://github.com/mmcgrana/services-engineering)
* [System design cheat sheet](https://gist.github.com/vasanthk/485d1c25737e8e72759f)
* [A distributed systems reading list](http://dancres.github.io/Pages/)
* [Cracking the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
* [Hired in tech](http://www.hiredintech.com/system-design/the-system-design-process/)
* [Cracking the coding interview](https://www.amazon.com/dp/0984782850/)
* [High scalability](http://highscalability.com/)
* [checkcheckzz/system-design-interview](https://github.com/checkcheckzz/system-design-interview)
* [shashank88/system_design](https://github.com/shashank88/system_design)
* [mmcgrana/services-engineering](https://github.com/mmcgrana/services-engineering)
* [System design cheat sheet](https://gist.github.com/vasanthk/485d1c25737e8e72759f)
* [A distributed systems reading list](http://dancres.github.io/Pages/)
* [Cracking the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
## 联系方式
欢迎联系我讨论本文的不足、问题或者意见。
可以在我的 [GitHub 主页](https://github.com/donnemartin) 上找到我的联系方式
可以在我的 [GitHub 主页](https://github.com/donnemartin)上找到我的联系方式
## 许可
Creative Commons Attribution 4.0 International License (CC BY 4.0)
http://creativecommons.org/licenses/by/4.0/


@ -1,4 +1,4 @@
*[English](README.md) ∙ [日本語](README-ja.md) ∙ [简体中文](README-zh-Hans.md) ∙ [繁體中文](README-zh-TW.md) | [العَرَبِيَّة‎](https://github.com/donnemartin/system-design-primer/issues/170) ∙ [বাংলা](https://github.com/donnemartin/system-design-primer/issues/220) ∙ [Português do Brasil](https://github.com/donnemartin/system-design-primer/issues/40) ∙ [Deutsch](https://github.com/donnemartin/system-design-primer/issues/186) ∙ [ελληνικά](https://github.com/donnemartin/system-design-primer/issues/130) ∙ [עברית](https://github.com/donnemartin/system-design-primer/issues/272) ∙ [Italiano](https://github.com/donnemartin/system-design-primer/issues/104) ∙ [한국어](https://github.com/donnemartin/system-design-primer/issues/102) ∙ [فارسی](https://github.com/donnemartin/system-design-primer/issues/110) ∙ [Polski](https://github.com/donnemartin/system-design-primer/issues/68) ∙ [русский язык](https://github.com/donnemartin/system-design-primer/issues/87) ∙ [Español](https://github.com/donnemartin/system-design-primer/issues/136) ∙ [ภาษาไทย](https://github.com/donnemartin/system-design-primer/issues/187) ∙ [Türkçe](https://github.com/donnemartin/system-design-primer/issues/39) ∙ [tiếng Việt](https://github.com/donnemartin/system-design-primer/issues/127) ∙ [Français](https://github.com/donnemartin/system-design-primer/issues/250) | [Add Translation](https://github.com/donnemartin/system-design-primer/issues/28)*
# 系統設計入門
@ -35,11 +35,11 @@
關於面試的其他主題:
* [學習指南](#學習指南)
* [如何解決一個系統設計的面試題目](#如何解決一個系統設計的面試題目)
* [系統設計面試問題與**解答**](#系統設計面試問題與解答)
* [物件導向設計問題與**解答**](#物件導向設計面試問題與解答)
* [其他的系統設計面試問題](#其他的系統設計面試問題)
## 學習單字卡
@ -48,26 +48,26 @@
<br/>
</p>
底下提供的[學習單字卡](https://apps.ankiweb.net/)以每隔一段時間間隔出現的方式,幫助你學習系統設計的概念。
* [系統設計單字卡](resources/flash_cards/System%20Design.apkg)
* [系統設計練習單字卡](resources/flash_cards/System%20Design%20Exercises.apkg)
* [物件導向設計練習單字卡](resources/flash_cards/OO%20Design.apkg)
這些是非常棒的學習資源,隨時都可以使用。
### 程式設計學習資源:互動式程式學習設計
你正在尋找資源來面對[**程式語言面試**](https://github.com/donnemartin/interactive-coding-challenges)嗎?
<p align="center">
<img src="images/b4YtAEN.png">
<br/>
</p>
請參考 [**互動程式語言學習挑戰**](https://github.com/donnemartin/interactive-coding-challenges),當中還包含了底下的學習單字卡:
* [程式語言學習單卡](https://github.com/donnemartin/interactive-coding-challenges/tree/master/anki_cards/Coding.apkg)
## 如何貢獻
@ -78,11 +78,11 @@
* 修正錯誤
* 改善章節內容
* 增加新的章節
* [翻譯](https://github.com/donnemartin/system-design-primer/issues/28)
某些還需要再完善的章節放在 [修正中](#仍在進行中)。
請參考 [貢獻指南](CONTRIBUTING.md)。
## 系統設計主題的索引
@ -95,92 +95,92 @@
<br/>
</p>
* [系統設計主題:從這裡開始](#系統設計主題從這裡開始)
* [第一步:複習關於可擴展性的影片講座](#第一步複習關於可擴展性的影片講座)
* [第二步:複習關於可擴展性的文章](#第二步複習關於可擴展性的文章)
* [下一步](#下一步)
* [效能與可擴展性](#效能與可擴展性)
* [延遲與吞吐量](#延遲與吞吐量)
* [可用性與一致性](#可用性與一致性)
* [CAP 理論](#cap-理論)
* [CP-一致性與部分容錯性](#cp-一致性與部分容錯性)
* [AP-可用性與部分容錯性](#ap-可用性與部分容錯性)
* [一致性模式](#一致性模式)
* [弱一致性](#弱一致性)
* [最終一致性](#最終一致性)
* [強一致性](#強一致性)
* [可用性模式](#可用性模式)
* [容錯轉移](#容錯轉移)
* [複寫機制](#複寫機制)
* [域名系統](#域名系統)
* [內容傳遞網路(CDN)](#內容傳遞網路cdn)
* [推送式 CDNs](#推送式-cdns)
* [拉取式 CDNs](#拉取式-cdns)
* [負載平衡器](#負載平衡器)
* [主動到備用切換模式(AP Mode)](#主動到備用切換模式ap-mode)
* [雙主動切換模式(AA Mode)](#雙主動切換模式aa-mode)
* [第四層負載平衡](#第四層負載平衡)
* [第七層負載平衡](#第七層負載平衡)
* [水平擴展](#水平擴展)
* [反向代理(網頁伺服器)](#反向代理網頁伺服器)
* [負載平衡器與反向代理伺服器](#負載平衡器與反向代理伺服器)
* [應用層](#應用層)
* [微服務](#微服務)
* [服務發現](#服務發現)
* [資料庫](#資料庫)
* [關連式資料庫管理系統(RDBMS)](#關連式資料庫管理系統rdbms)
* [主從複寫](#主從複寫)
* [主動模式複寫](#主動模式複寫)
* [聯邦式資料庫](#聯邦式資料庫)
* [分片](#分片)
* [反正規化](#反正規化)
* [SQL 優化](#sql-優化)
* [NoSQL](#nosql)
* [鍵-值對的資料庫](#鍵-值對的資料庫)
* [文件類型資料庫](#文件類型資料庫)
* [列儲存型資料庫](#列儲存型資料庫)
* [圖形資料庫](#圖形資料庫)
* [SQL 或 NoSQL](#sql-或-nosql)
* [快取](#快取)
* [客戶端快取](#客戶端快取)
* [CDN 快取](#cdn-快取)
* [網站伺服器快取](#網站伺服器快取)
* [資料庫快取](#資料庫快取)
* [應用程式快取](#應用程式快取)
* [資料庫查詢級別的快取](#資料庫查詢級別的快取)
* [物件級別的快取](#物件級別的快取)
* [什麼時候要更新快取](#什麼時候要更新快取)
* [快取模式](#快取模式)
* [寫入模式](#寫入模式)
* [事後寫入(回寫)](#事後寫入回寫)
* [更新式快取](#更新式快取)
* [非同步機制](#非同步機制)
* [訊息佇列](#訊息佇列)
* [工作佇列](#工作佇列)
* [背壓機制](#背壓機制)
* [通訊](#通訊)
* [傳輸控制通訊協定(TCP)](#傳輸控制通訊協定tcp)
* [使用者資料流通訊協定 (UDP)](#使用者資料流通訊協定-udp)
* [遠端程式呼叫 (RPC)](#遠端程式呼叫-rpc)
* [具象狀態轉移 (REST)](#具象狀態轉移-rest)
* [資訊安全](#資訊安全)
* [附錄](#附錄)
* [2 的次方表](#2-的次方表)
* [每個開發者都應該知道的延遲數量級](#每個開發者都應該知道的延遲數量級)
* [其他的系統設計面試問題](#其他的系統設計面試問題)
* [真實世界的架構](#真實世界的架構)
* [公司的系統架構](#公司的系統架構)
* [公司的工程部落格](#公司的工程部落格)
* [仍在進行中](#仍在進行中)
* [致謝](#致謝)
* [聯絡資訊](#聯絡資訊)
* [授權](#授權)
## 學習指南
> 基於你面試的時間 (短、中、長) 來複習這些建議的主題。
![Imgur](images/OfVllex.png)
**Q: 對於面試者來說,我需要知道這裡所有的知識嗎?**
@ -206,11 +206,11 @@
|---------------------------------------------------------------------------------|------|------|--------|
| 閱讀 [系統設計主題的索引](#系統設計主題的索引) 來取得關於系統如何運作的廣泛知識 | :+1: | :+1: | :+1: |
| 閱讀一些你要面試的 [公司的工程部落格](#公司的工程部落格) 文章 | :+1: | :+1: | :+1: |
| 閱讀關於 [真實世界的架構](#真實世界的架構) | :+1: | :+1: | :+1: |
| 複習 [如何解決一個系統設計的面試題目](#如何解決一個系統設計的面試題目) | :+1: | :+1: | :+1: |
| 完成 [系統設計面試題目與解答](#系統設計面試問題與解答) | 一些 | 很多 | 大部分 |
| 完成 [物件導向設計與解答](#物件導向設計面試問題與解答) | 一些 | 很多 | 大部分 |
| 複習 [其他的系統設計面試問題](#其他的系統設計面試問題) | 一些 | 很多 | 大部分 |
## 如何解決一個系統設計的面試題目
@ -245,7 +245,7 @@
對每一個核心元件進行深入的分析。舉例來說, 如果你被問到 [設計一個短網址的服務](solutions/system_design/pastebin/README.md) ,可以開始討論以下內容:
* 產生並儲存一個完整網址的 Hash
* [MD5](solutions/system_design/pastebin/README.md) 和 [Base62](solutions/system_design/pastebin/README.md)
* Hash 碰撞
* SQL 或 NoSQL
* 資料庫的模型
@ -268,17 +268,17 @@
你可能被要求針對你的設計進行一些估算,可以參考 [附錄](#附錄) 的一些資源:
* [使用快速估算法](http://highscalability.com/blog/2011/1/26/google-pro-tip-use-back-of-the-envelope-calculations-to-choo.html)
* [2 的次方表](#2-的次方表)
* [每個開發者都應該知道的延遲數量級](#每個開發者都應該知道的延遲數量級)
### 相關資源與延伸閱讀
查看以下的連結獲得更好的做法:
* [如何在系統設計的面試中勝出](https://www.palantir.com/2011/10/how-to-rock-a-systems-design-interview/)
* [系統設計的面試](http://www.hiredintech.com/system-design)
* [系統架構與設計的面試介紹](https://www.youtube.com/watch?v=ZgdS0EUmn70)
## 系統設計面試問題與解答
@ -288,63 +288,63 @@
| 問題 | |
|-----------------------------------------------------------------------------------------------------|--------------------------------------------------------|
| 設計 Pastebin.com (或 Bit.ly) | [解答](solutions/system_design/pastebin/README.md) |
| 設計一個像是 Twitter 的 timeline (或 Facebook feed)設計一個 Twitter 搜尋功能 (or Facebook 搜尋功能) | [解答](solutions/system_design/twitter/README.md) |
| 設計一個爬蟲系統 | [解答](solutions/system_design/web_crawler/README.md) |
| 設計 Mint.com 網站 | [解答](solutions/system_design/mint/README.md) |
| 設計一個社交網站的資料結構 | [解答](solutions/system_design/social_graph/README.md) |
| 設計一個搜尋引擎使用的鍵值儲存資料結構 | [解答](solutions/system_design/query_cache/README.md) |
| 設計一個根據產品分類的亞馬遜銷售排名 | [解答](solutions/system_design/sales_rank/README.md) |
| 在 AWS 上設計一個百萬用戶等級的系統 | [解答](solutions/system_design/scaling_aws/README.md) |
| 增加一個系統設計的問題 | [貢獻](#如何貢獻) |
### 設計 Pastebin.com (或 Bit.ly)
[閱讀練習與解答](solutions/system_design/pastebin/README.md)
![Imgur](images/4edXG0T.png)
### 設計一個像是 Twitter 的 timeline (或 Facebook feed)設計一個 Twitter 搜尋功能 (or Facebook 搜尋功能)
[閱讀練習與解答](solutions/system_design/twitter/README.md)
![Imgur](images/jrUBAF7.png)
### 設計一個爬蟲系統
[閱讀練習與解答](solutions/system_design/web_crawler/README.md)
![Imgur](images/bWxPtQA.png)
### 設計 Mint.com 網站
[閱讀練習與解答](solutions/system_design/mint/README.md)
![Imgur](images/V5q57vU.png)
### 設計一個社交網站的資料結構
[閱讀練習與解答](solutions/system_design/social_graph/README.md)
![Imgur](images/cdCv5g7.png)
### 設計一個搜尋引擎使用的鍵值儲存資料結構
[閱讀練習與解答](solutions/system_design/query_cache/README.md)
![Imgur](images/4j99mhe.png)
### 設計一個根據產品分類的亞馬遜銷售排名
[閱讀練習與解答](solutions/system_design/sales_rank/README.md)
![Imgur](images/MzExP06.png)
### 在 AWS 上設計一個百萬用戶等級的系統
[閱讀練習與解答](solutions/system_design/scaling_aws/README.md)
![Imgur](images/jj3A5N8.png)
## 物件導向設計面試問題與解答
@ -356,13 +356,13 @@
| 問題 | |
|--------------------------|----------------------------------------------------------------------------|
| 設計一個 hash map | [解答](solutions/object_oriented_design/hash_table/hash_map.ipynb) |
| 設計一個 LRU 快取 | [解答](solutions/object_oriented_design/lru_cache/lru_cache.ipynb) |
| 設計一個客服系統 | [解答](solutions/object_oriented_design/call_center/call_center.ipynb) |
| 設計一副牌 | [解答](solutions/object_oriented_design/deck_of_cards/deck_of_cards.ipynb) |
| 設計一個停車場 | [解答](solutions/object_oriented_design/online_chat/online_chat.ipynb) |
| 設計一個環形陣列 | [如何貢獻](#如何貢獻) |
| 增加一個物件導向設計問題 | [如何貢獻](#如何貢獻) |
## 系統設計主題:從這裡開始
@ -372,7 +372,7 @@
### 第一步:複習關於可擴展性的影片講座
[哈佛大學可擴展性的影片](https://www.youtube.com/watch?v=-W9F__D3oY4)
* 包含以下主題:
* 垂直擴展
@ -384,13 +384,13 @@
### 第二步:複習關於可擴展性的文章
[可擴展性](http://www.lecloud.net/tagged/scalability/chrono)
* 包含以下主題:
* [複製](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
* [資料庫](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
* [快取](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
* [非同步](http://www.lecloud.net/post/9699762917/scalability-for-dummies-part-4-asynchronism)
### 下一步
@ -415,8 +415,8 @@
### 來源及延伸閱讀
* [簡談可擴展性](http://www.allthingsdistributed.com/2006/03/a_word_on_scalability.html)
* [可擴展性、可用性、穩定性與相關模式](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
## 延遲與吞吐量
@ -428,7 +428,7 @@
### 來源及延伸閱讀
* [了解延遲與吞吐量](https://community.cadence.com/cadence_blogs_8/b/sd/archive/2010/09/13/understanding-latency-vs-throughput)
## 可用性與一致性
@ -460,9 +460,9 @@
### 來源及延伸閱讀
* [複習 CAP 理論](http://robertgreiner.com/2014/08/cap-theorem-revisited/)
* [簡單的介紹 CAP 理論](http://ksat.me/a-plain-english-introduction-to-cap-theorem/)
* [CAP 問與答](https://github.com/henryr/cap-faq)
## 一致性模式
@ -476,7 +476,7 @@
### 最終一致性
在寫入後的讀取操作最終可以看到被寫入的資料(通常在數毫秒內)。資料透過非同步的方式被複製。
DNS 或是電子郵件系統使用的就是這種方式,最終一致性在高可用的系統中效果很好。
@ -488,7 +488,7 @@ DNS 或是電子郵件系統使用的就是這種方式,最終一致性在高
### 來源及延伸閱讀
* [資料中心的記錄行為](http://snarfed.org/transactions_across_datacenters_io.html)
## 可用性模式
@ -496,7 +496,7 @@ DNS 或是電子郵件系統使用的就是這種方式,最終一致性在高
### 容錯轉移
#### 主動到備用切換模式(AP Mode)
在這個模式下heartbeat 訊號會在主動和備用的機器中發送,當 heartbeat 中斷時,備用的機器就會切換為主動機器的 IP 位置接替服務。
@ -504,7 +504,7 @@ DNS 或是電子郵件系統使用的就是這種方式,最終一致性在高
這個模式的切換也被稱為主從的切換模式。
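以下是 heartbeat 機制的簡化示意程式(非原文內容,`last_heartbeat_at()` 與 `take_over_ip()` 皆為假設性的函式),用來說明備用機器偵測到 heartbeat 中斷後如何接手服務:

```python
import time

HEARTBEAT_TIMEOUT = 5  # 假設值:超過 5 秒沒有收到 heartbeat 就視為主動機器失效

def standby_monitor(last_heartbeat_at, take_over_ip):
    """備用機器的監控迴圈:heartbeat 逾時後接手虛擬 IP。"""
    while True:
        if time.time() - last_heartbeat_at() > HEARTBEAT_TIMEOUT:
            take_over_ip()   # 假設此函式會把服務的 IP 位置切換到備用機器
            return
        time.sleep(1)
```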
#### 雙主動切換模式(AA Mode)
在此模式下,兩台伺服器都會負責處理流量,流量會在他們之間進行分散負載。
@ -523,8 +523,8 @@ DNS 或是電子郵件系統使用的就是這種方式,最終一致性在高
這一個主題進一步討論了 [資料庫](#資料庫) 部分:
* [主動到備用複寫](#主動到備用複寫)
* [雙主動複寫](#雙主動複寫)
## 域名系統
@ -536,16 +536,16 @@ DNS 或是電子郵件系統使用的就是這種方式,最終一致性在高
DNS 是將域名轉換為 IP 地址的系統。
DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢域名時,你的路由器或 ISP 業者會提供連接到 DNS 伺服器的資訊。較底層的 DNS 伺服器會快取查詢的結果,而這些快取資訊會因為 DNS 的傳遞而逐漸更新。DNS 的結果可以暫存在瀏覽器或操作系統中一段時間,時間的長短取決於 [存活時間(TTL)](https://en.wikipedia.org/wiki/Time_to_live) 的設定。
* **NS 記錄 (域名伺服器)** - 指定解析域名或子域名的 DNS 伺服器。
* **MX 記錄 (電子郵件交換伺服器)** - 指定接收電子郵件的伺服器。
* **A 記錄 (地址)** - 指向要對應的 IP 位置。
* **CNAME (別名)** - 從一個域名指向另外一個域名,或是 `CNAME` (example.com 指向 www.example.com) 或指向一個 `A` 記錄。
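下面以 Python 標準函式庫的 `socket` 模組示意 A 記錄的解析(`example.com` 僅為示意用的網域TTL 與快取由作業系統的解析器處理,不會出現在程式中):

```python
import socket

# 解析主機名稱對應的 A 記錄IPv4 位址)
infos = socket.getaddrinfo("example.com", 80, family=socket.AF_INET, type=socket.SOCK_STREAM)
for family, socktype, proto, canonname, sockaddr in infos:
    print(sockaddr[0])  # 解析出來的 IPv4 位址
```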
[CloudFlare](https://www.cloudflare.com/dns/) 和 [Route 53](https://aws.amazon.com/route53/) 提供了 DNS 的服務。而這些 DNS 服務商透過以下幾種方式來決定流量如何被分派:
* [加權輪詢](http://g33kinfo.com/info/archives/2657)
* 防止流量進入正在維修中的伺服器
* 在不同大小的集群中進行負載平衡
* A/B 測試
@ -560,11 +560,11 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
### 來源及延伸閱讀
* [DNS 架構](https://technet.microsoft.com/en-us/library/dd197427(v=ws.10).aspx)
* [維基百科](https://en.wikipedia.org/wiki/Domain_Name_System)
* [DNS 文章](https://support.dnsimple.com/categories/dns/)
## 內容傳遞網路(CDN)
<p align="center">
<img src="images/h9TAuGI.jpg">
@ -572,7 +572,7 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
<i><a href=https://www.creative-artworks.eu/why-use-a-content-delivery-network-cdn/>來源:為什麼要使用 CDN</a></i>
</p>
內容傳遞網路(CDN)是一種全球性的分散式代理伺服器,它透過靠近使用者的伺服器來提供檔案。通常 HTML/CSS/JS、圖片或影片等檔案會靜態檔案會透過 CDN 來提供,儘管 Amazon 的 CloudFront 也支援了動態內容的 CDN 服務。而 CDN 的 DNS 服務會告知使用者要連接哪一台伺服器。
透過 CDN 來取得檔案可以大幅度地增加請求的效率,因為:
@ -589,7 +589,7 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
拉取式的 CDN 指的是當地一個使用者來請求該資源時,才從伺服器上抓取對應檔案。將檔案留在伺服器上並且重寫指向 CDN 的 URL直到檔案被快取在 CDN 上為止,請求都會比較慢。
[存活時間 (TTL)](https://en.wikipedia.org/wiki/Time_to_live) 決定檔案要被緩存多久的時間。拉取式 CDN 可以節省儲存空間,但在過期的文件被更新之前,則會導致多餘的流量。
拉取式的 CDN 適合高流量的網站,因為檔案會被平均的分散在各個結點伺服器中。
@ -601,9 +601,9 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
### 來源及延伸閱讀
* [全球性的 CDN](http://repository.cmu.edu/cgi/viewcontent.cgi?article=2112&context=compsci)
* [拉取式和推拉式 CDN 的差別](http://www.travelblogadvice.com/technical/the-differences-between-push-and-pull-cdns/)
* [維基百科](https://en.wikipedia.org/wiki/Content_delivery_network)
## 負載平衡器
@ -619,12 +619,12 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
* 避免資源過載
* 避免單點失敗
負載平衡器可以透過硬體(較昂貴)或 HAProxy 等軟體來實現。
其餘額外的好處有:
* **SSL 終結** - 將傳入的請求解密,並且加密伺服器的回應,如此一來後端伺服器就不需要進行這些高度消耗資源的運算
* 不需要在每一台機器上安裝 [X.509 憑證](https://en.wikipedia.org/wiki/X.509)。
* **Session 保存** - 發行 cookie並將特定使用者的請求路由到同樣的後端伺服器上。
為了避免故障,通常會採用 [主動到備用切換模式](#主動到備用切換模式(AP Mode)) 或 [雙主動切換模式](#雙主動切換模式(AA Mode)) 這樣多個負載平衡器的模式。
@ -634,13 +634,13 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
* 隨機
* 最少負載
* Session/cookies
* [輪詢調度或加權輪詢調度](http://g33kinfo.com/info/archives/2657)
* [第四層負載平衡](#第四層負載平衡)
* [第七層負載平衡](#第七層負載平衡)
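以下是加權輪詢的簡化示意(主機位址與權重皆為假設值):權重越高的後端伺服器,在輪詢中被選到的次數越多:

```python
import itertools

# 假設的後端伺服器與權重,權重 3 會分到權重 1 三倍的流量
BACKENDS = {"10.0.0.1": 3, "10.0.0.2": 1}

# 依權重展開後端清單,再用 cycle 無限輪詢
rotation = itertools.cycle(
    [host for host, weight in BACKENDS.items() for _ in range(weight)]
)

def pick_backend():
    return next(rotation)
```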
### 第四層負載平衡
第四層的負載平衡器會監看 [傳輸層](#傳輸層) 的資訊來決定如何分發請求。一般來說,這包含了來源、目標 IP 位置,以及在 header 中的 port但不包含資料本身的內容。第四層的負載平衡器會透過 [網路地址轉換(NAT)](https://www.nginx.com/resources/glossary/layer-4-load-balancing/) 來向上游的伺服器轉發資料。
### 第七層負載平衡
@ -656,7 +656,7 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
* 水平擴展會增加複雜性,同時也涉及了多台伺服器的議題
* 伺服器應該是無狀態的:不應該包括像是 session 或資料圖片等和使用者相關的內容
* Session 可以集中儲存在資料庫或 [快取](#快取)(Redis、Memcached) 等資料儲存中。
* 快取伺服器或資料庫需要隨著伺服器的增加而進行擴展,以便處理更多的請求。
### 負載平衡器的缺點
@ -667,15 +667,15 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
### 來源及延伸閱讀
* [NGINX 架構](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
* [HAProxy 架構指南](http://www.haproxy.org/download/1.2/doc/architecture.txt)
* [可擴展性](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
* [維基百科](https://en.wikipedia.org/wiki/Load_balancing_(computing))
* [第四層負載平衡](https://www.nginx.com/resources/glossary/layer-4-load-balancing/)
* [第七層負載平衡](https://www.nginx.com/resources/glossary/layer-7-load-balancing/)
* [ELB 監聽器設定](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html)
## 反向代理(網頁伺服器)
<p align="center">
<img src="images/n41Azff.png">
@ -691,7 +691,7 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
* **增加安全性** - 隱藏後端伺服器的資訊、可以設定 IP 的黑名單、限制每個客戶端的連線數量等。
* **增加可擴展性與靈活性** - 客戶端只會看到反向代理伺服器的 IP 或域名,這樣你就可以增加背後伺服器的數量或設定而不影響客戶端。
* **SSL 終止** - 解密傳入的請求、加密伺服器的回應,這樣後端伺服器就不需要進行這些高成本的操作
* 不需要在每一台伺服器安裝 [X.509 憑證](https://en.wikipedia.org/wiki/X.509)。
* **壓縮** - 壓縮伺服器的回應
* **快取** - 直接在代理伺服器回應命中快取的結果
* **靜態檔案** - 直接提供靜態內容
@ -709,14 +709,14 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
### 反向代理伺服器的缺點
* 引入反向代理伺服器會增加系統複雜度。
* 只有一台反向代理伺服器會有單點失效的問題,而設定多台的反向代理伺服器(如 [故障轉移](https://en.wikipedia.org/wiki/Failover) )同樣會增加系統複雜度。
### 來源與延伸閱讀
* [反向代理伺服器與負載平衡](https://www.nginx.com/resources/glossary/reverse-proxy-vs-load-balancer/)
* [NGINX 架構](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
* [HAProxy 架構指南](http://www.haproxy.org/download/1.2/doc/architecture.txt)
* [維基百科](https://en.wikipedia.org/wiki/Reverse_proxy)
## 應用層
@ -726,11 +726,11 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
<i><a href=http://lethain.com/introduction-to-architecting-systems-for-scale/#platform_layer>資料來源:可縮放式系統架構介紹</a></i>
</p>
將 Web 服務層與應用層(也被稱為平台層)分離,如此一來這兩層就可以獨立縮放與設定,增加新的 API 服務只需要增加應用伺服器,而不需要增加額外的 Web 伺服器。
**單一職責原則**鼓勵小型、自治的服務與共同合作,小型團隊透過提供小型的服務可以更有效率地讓計畫成長。
在應用層中的工作程式可以實作 [非同步機制](#非同步機制)
### 微服務
@ -740,7 +740,7 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
### 服務發現
[Consul](https://www.consul.io/docs/index.html)、[Etcd](https://coreos.com/etcd/docs/latest), 或是 [Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) 等系統可以透過註冊的名稱、位置、Port 等資訊來幫助各個服務發現彼此。[Health checks](https://www.consul.io/intro/getting-started/checks.html) 可以幫助確認服務的完整性以及是否經常使用一個 [HTTP](#hypertext-transfer-protocol-http) 的路徑。[鍵-值對的資料庫](#鍵-值對的資料庫) 則用來儲存設定的資訊與其他共享的資料。
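舉例來說,服務可以透過 Consul agent 的 HTTP API 註冊自己與健康檢查。以下為簡化示意,假設本機已執行 Consul agent(預設埠 8500),並使用第三方套件 `requests`,服務名稱與埠號皆為假設值:

```python
import requests

# 向本機的 Consul agent 註冊服務與 HTTP 健康檢查
payload = {
    "Name": "user-service",                      # 假設的服務名稱
    "Port": 8080,
    "Check": {
        "HTTP": "http://127.0.0.1:8080/health",  # Consul 會定期呼叫此路徑確認健康狀態
        "Interval": "10s",
    },
}
resp = requests.put("http://127.0.0.1:8500/v1/agent/service/register", json=payload)
resp.raise_for_status()
```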
### 應用層的缺點
@ -749,11 +749,11 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
### 來源與延伸閱讀
* [可擴展式系統架構介紹](http://lethain.com/introduction-to-architecting-systems-for-scale)
* [破解系統設計面試](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
* [面向服務架構](https://en.wikipedia.org/wiki/Service-oriented_architecture)
* [Zookeeper 介紹](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper)
* [建構微服務系統你所需要知道的一切](https://cloudncode.wordpress.com/2016/07/22/msa-getting-started/)
## 資料庫
@ -763,7 +763,7 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
<i><a href=https://www.youtube.com/watch?v=vg5onp8TU6Q>來源:擴展你的使用者數量到第一個一千萬量級</a></i>
</p>
### 關連式資料庫管理系統(RDBMS)
像 SQL 這種關連式資料庫是以一組表格的形式存在的資料集合。
@ -804,7 +804,7 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
##### 主動模式的缺點
* 你需要一個負載平衡器來或是在你的應用程式邏輯中做修改來決定要寫入哪個資料庫。
* 大多數的主動模式資料庫無法保證一致性(違反 ACID),或是會因為同步而產生了寫入延遲。
* 隨著更多寫入節點的增加和延遲的提高,如何解決衝突就顯得更加重要。
* 參考 [複寫的缺點](#複寫的缺點) 章節,你可以看到主動模式複寫與主從模式**共同**的缺點。
@ -818,8 +818,8 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
##### 來源及延伸閱讀
* [可擴展性、可用性、穩定性及其模式](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
* [多主要資料庫複寫](https://en.wikipedia.org/wiki/Multi-master_replication)
#### 聯邦式資料庫
@ -829,7 +829,7 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
<i><a href=https://www.youtube.com/watch?v=vg5onp8TU6Q>來源:擴展你的使用者數量到第一個一千萬量級</a></i>
</p>
聯邦式資料庫(或是指功能式切分)是將資料庫按照對應的功能進行分割。例如:你可以三個資料庫,分別是:**論壇**、**使用者**和**產品**,而不僅僅是單一資料庫。這樣會減少每個資料庫寫入與讀取的流量,進而降低複製的延遲。較少的資料意味者更多適合放入記憶體中的資料,進而增加快取命中率。因為沒有循序寫入的中央式主資料庫,你可以並行寫入以增加吞吐量。
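以下是功能切分的簡化示意(連線字串皆為假設值):應用程式依照功能領域選擇對應的資料庫連線:

```python
# 假設的連線字串,每個功能領域對應一個資料庫
DATABASES = {
    "forums":   "postgresql://db-forums/forums",
    "users":    "postgresql://db-users/users",
    "products": "postgresql://db-products/products",
}

def dsn_for(domain):
    """回傳負責該功能領域的資料庫連線字串。"""
    return DATABASES[domain]

# 例如:所有與使用者相關的查詢都連到 dsn_for("users")
```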
##### 聯邦式資料庫的缺點
@ -840,7 +840,7 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
##### 來源及延伸閱讀
* [來源:擴展你的使用者數量到第一個一千萬量級](https://www.youtube.com/watch?v=vg5onp8TU6Q)
#### 分片
@ -866,9 +866,9 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
##### 來源及延伸閱讀
* [分片時代來臨](http://highscalability.com/blog/2009/8/6/an-unorthodox-approach-to-database-design-the-coming-of-the.html)
* [分片資料庫架構](https://en.wikipedia.org/wiki/Shard_(database_architecture))
* [一致性 hashing](http://www.paperplanes.de/2011/12/9/the-magic-of-consistent-hashing.html)
#### 反正規化
@ -888,7 +888,7 @@ DNS 是階層式的架構,一部分的 DNS 伺服器位於頂層,當查詢
###### 來源及延伸閱讀
* [反正規化](https://en.wikipedia.org/wiki/Denormalization)
#### SQL 優化
@ -910,8 +910,8 @@ SQL 優化是一個涵蓋範圍很廣的主題,有許多相關的 [參考書
* 使用 `INT` 來儲存數量級達到 2^32 或 40 億等較大的數字。
* 使用 `DECIMAL` 來儲存貨幣資料可以避免浮點數表達錯誤。
* 避免儲存龐大的 `BLOBS`,取而代之的,應該儲存存放該對象的位置。
* `VARCHAR(255)` 是使用 8 位數來儲存時的最大表示法,在某些關連式資料庫中,要最大限度地使用它。
* 在適用的情況下設定 `NOT NULL` 來 [提高搜尋性能](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search)。
##### 使用正確的索引
@ -923,7 +923,7 @@ SQL 優化是一個涵蓋範圍很廣的主題,有許多相關的 [參考書
##### 避免高成本的 Join 操作
* 有性能需求時,可以進行 [反正規化](#反正規化)。
##### 分割資料表
@ -931,18 +931,18 @@ SQL 優化是一個涵蓋範圍很廣的主題,有許多相關的 [參考書
##### 調整查詢的快取
* 在某些情況下,[查詢快取](http://dev.mysql.com/doc/refman/5.7/en/query-cache) 可能會導致 [性能問題](https://www.percona.com/blog/2014/01/28/10-mysql-performance-tuning-settings-after-installation/)。
##### 來源及延伸閱讀
* [MySQL 查詢優化小提示](http://20bits.com/article/10-tips-for-optimizing-mysql-queries-that-dont-suck)
* [為什麼使用 VARCHAR(255) 很常見](http://stackoverflow.com/questions/1217466/is-there-a-good-reason-i-see-varchar255-used-so-often-as-opposed-to-another-l)
* [Null 值是如何影響資料庫性能](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search)
* [慢 SQL log 查詢](http://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html)
### NoSQL
NoSQL 指的是 **鍵-值對的資料庫**、**文件類型資料庫**、**列儲存型資料庫** 和 **圖形資料庫** 等的統稱。資料是非正規化的Join 大部分在應用端完成。大多數的 NoSQL 資料庫無法真正實現 ACID 的 transaction他們通常會支援 [最終一致性](#最終一致性)。
**BASE** 通常被用來描述 NoSQL 資料庫的特性。 跟 [CAP 理論](#cap 理論) 相比BASE 強調可用性而非一致性。
@ -964,10 +964,10 @@ NoSQL 指的是 **鍵-值對的資料庫**、**文件類型資料庫**、**列
##### 來源及延伸閱讀
* [鍵值對資料庫](https://en.wikipedia.org/wiki/Key-value_database)
* [鍵值對資料庫的缺點](http://stackoverflow.com/questions/4056093/what-are-the-disadvantages-of-using-a-key-value-table-over-nullable-columns-or)
* [Redis 架構](http://qnimate.com/overview-of-redis-architecture/)
* [Memcached 架構](https://www.adayinthelifeof.nl/2011/02/06/memcache-internals/)
#### 文件類型資料庫
@ -977,16 +977,16 @@ NoSQL 指的是 **鍵-值對的資料庫**、**文件類型資料庫**、**列
根據底層實作的不同文件資料庫可以根據集合、標籤、metadata 或目錄等來組織而成。儘管不同的文件可以被組織在一起或是分成一組,但彼此之間可能具有完全不同的內容。
某些文件型資料庫,例如 [MongoDB](https://www.mongodb.com/mongodb-architecture) 和 [CouchDB](https://blog.couchdb.org/2016/08/01/couchdb-2-0-architecture/) 同樣提供了類似於 SQL 查詢語句的功能來實現複雜的查詢。[DynamoDB](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/decandia07dynamo.pdf)則同時支援了鍵值對儲存和文件類型儲存的功能。
文件類型的資料庫具備高度靈活性,通常用於處理偶爾變化的資料。
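以下以第三方套件 `pymongo` 搭配本機的 MongoDB 為例(集合與欄位名稱皆為假設值),示意同一個集合中的文件可以有不同的欄位:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client.app

# 同一個集合中的文件不需要共用相同的結構
db.users.insert_one({"user_id": 123, "name": "Alice", "tags": ["admin"]})
db.users.insert_one({"user_id": 456, "name": "Bob", "signup_source": "mobile"})

print(db.users.find_one({"user_id": 123}))
```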
##### 延伸閱讀
* [文件類型的資料庫](https://en.wikipedia.org/wiki/Document-oriented_database)
* [MongoDB 架構](https://www.mongodb.com/mongodb-architecture)
* [CouchDB 架構](https://blog.couchdb.org/2016/08/01/couchdb-2-0-architecture/)
* [Elasticsearch 架構](https://www.elastic.co/blog/found-elasticsearch-from-the-bottom-up)
#### 列儲存型資料庫
@ -998,18 +998,18 @@ NoSQL 指的是 **鍵-值對的資料庫**、**文件類型資料庫**、**列
> 抽象模型: 巢狀的 Map `ColumnFamily<RowKey, Columns<ColKey, Value, Timestamp>>`
列儲存型資料庫的基本單元是一列 (名稱/值為一組)。每一列可以被分到一個列的族群中(類似於 SQL 中的資料表)。而每個列族群之上還可以有一個超級列群。你可以透過列的鍵值來存取每一列,每個值都有一個時間戳記來解決版本問題。
Google 發表了第一個列儲存型資料庫 [Bigtable](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf),這影響了 Hadoop 生態系中常用的開源專案 [HBase](https://www.mapr.com/blog/in-depth-look-hbase-architecture),以及 Facebook 的 [Cassandra](http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/architecture/architectureIntro_c.html)。這些資料庫的儲存系統把鍵值利用字母順序來儲存,可以有效率的來讀取。
列儲存型資料庫提供了高可用性和高擴展性,通常被用在大量資料的儲存上。
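上面引用的抽象模型 `ColumnFamily<RowKey, Columns<ColKey, Value, Timestamp>>` 可以想像成巢狀的 Map。以下只是幫助理解的玩具示意並非 Bigtable/HBase/Cassandra 實際的儲存方式:

```python
import time

# column_family[row_key][column_key] = (value, timestamp)
user_profile = {}

def put(row_key, column_key, value):
    user_profile.setdefault(row_key, {})[column_key] = (value, time.time())

put("user:123", "name", "Alice")
put("user:123", "city", "Taipei")
print(user_profile["user:123"]["name"])   # ('Alice', <時間戳記>)
```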
##### 來源及延伸閱讀
* [SQL 和 NoSQL 歷史簡介](http://blog.grio.com/2015/11/sql-nosql-a-brief-history.html)
* [Bigtable 架構](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf)
* [HBase 架構](https://www.mapr.com/blog/in-depth-look-hbase-architecture)
* [Cassandra 架構](http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/architecture/architectureIntro_c.html)
#### 圖形資料庫
@ -1021,23 +1021,23 @@ Google 發表了第一個列儲存型資料庫 [Bigtable](http://www.read.seas.h
> 抽象模型:圖
在圖形資料庫中,每一個節點會對應一條紀錄,而每個邊描述兩個節點之間的關係。圖形資料庫針對表示外來鍵(Foreign Key)眾多的複雜關聯或多對多關聯進行優化。
圖形資料庫為了儲存複雜的資料結構,例如社群網路,提供了很高的性能。他們相對較新,尚未被廣泛使用,查詢工具或資源比較難取得,許多這種類型的資料庫只能透過 [REST API](#representational-state-transfer-rest) 來存取。
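以下用鄰接表簡單示意節點與邊的模型(資料為虛構),並示範「朋友的朋友」這種在關連式資料庫中需要多次 Join 的查詢:

```python
# 節點 -> 相鄰節點的集合(代表「互為朋友」的邊)
graph = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "dave"},
    "carol": {"alice"},
    "dave":  {"bob"},
}

def friends_of_friends(person):
    direct = graph.get(person, set())
    # 取所有朋友的朋友,再排除自己與直接朋友
    return set().union(*(graph.get(f, set()) for f in direct)) - direct - {person}

print(friends_of_friends("alice"))  # {'dave'}
```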
##### 來源及延伸閱讀
* [圖形資料庫](https://en.wikipedia.org/wiki/Graph_database)
* [Neo4j](https://neo4j.com/)
* [FlockDB](https://blog.twitter.com/2010/introducing-flockdb)
#### 來源及延伸閱讀NoSQL
* [資料庫術語解釋](http://stackoverflow.com/questions/3342497/explanation-of-base-terminology)
* [NoSQL 資料庫:調查與決策指南](https://medium.com/baqend-blog/nosql-databases-a-survey-and-decision-guidance-ea7823a822d#.wskogqenq)
* [可擴展性](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
* [NoSQL 介紹](https://www.youtube.com/watch?v=qI_g07C_Q5I)
* [NoSQL 模式](http://horicky.blogspot.com/2009/11/nosql-patterns.html)
### SQL 或 NoSQL
@ -1078,8 +1078,8 @@ Google 發表了第一個列儲存型資料庫 [Bigtable](http://www.read.seas.h
##### 來源及延伸閱讀: SQL 或 NoSQL
* [擴展你的使用者到第一個一千萬等級](https://www.youtube.com/watch?v=vg5onp8TU6Q)
* [SQL 和 NoSQL 的不同](https://www.sitepoint.com/sql-vs-nosql-differences/)
## 快取
@ -1095,11 +1095,11 @@ Google 發表了第一個列儲存型資料庫 [Bigtable](http://www.read.seas.h
### 客戶端快取
快取可以在客戶端(作業系統或瀏覽器)、[伺服器端](#反向代理伺服器) 或不同的緩存層等。
### CDN 快取
[內容傳遞網路(CDN)](#內容傳遞網路(CDN)) 也被視為一種快取。
### 網站伺服器快取
@ -1111,7 +1111,7 @@ Google 發表了第一個列儲存型資料庫 [Bigtable](http://www.read.seas.h
### 應用程式快取
基於記憶體的快取,像是 Memcached 和 Redis 是一種在應用層和資料庫之間的鍵值對快取。由於資料保存在記憶體中,比起存放在硬碟中的資料庫在存取上要快得多。記憶體的限制也比硬碟更多,所以像是 [least recently used (LRU)](https://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used) 的 [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) 方法可以讓 '熱門資料' 放在記憶體中,而比較 '冷門' 的資料在記憶體中失效。
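以下用 `collections.OrderedDict` 簡單示意 LRU 淘汰策略,讓「熱門資料留在記憶體、冷門資料被淘汰」的概念更具體;實務上通常直接使用 Memcached 或 Redis,而不是自己實作:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)          # 標記為最近使用
        return self.items[key]

    def set(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:  # 淘汰最久未使用的項目
            self.items.popitem(last=False)
```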
Redis 還有以下額外的功能:
@ -1168,12 +1168,12 @@ Redis 還有以下額外的功能:
* 將資料返回
```python
def get_user(self, user_id):
    # 先讀取快取
    user = cache.get("user.{0}", user_id)
    if user is None:
        # 快取未命中,改查資料庫
        user = db.query("SELECT * FROM users WHERE user_id = {0}", user_id)
        if user is not None:
            # 寫回快取,讓下一次讀取直接命中
            key = "user.{0}".format(user_id)
            cache.set(key, json.dumps(user))
    return user
```
@ -1205,15 +1205,15 @@ def get_user(self, user_id) :
應用程式程式碼:
```
set_user(12345, {"foo":"bar"})
```
快取程式碼:
```python
def set_user(user_id, values):
    # 同步寫入資料庫後,更新快取
    user = db.query("UPDATE Users WHERE id = {0}", user_id, values)
    cache.set(user_id, user)
```
直寫模式因為寫入操作的緣故,是一種較慢的操作,但讀取剛剛寫入的資料會很快,使用者通常比較能接受更新較慢,但讀取快速的情況。在快取中的資料不會過時。
@ -1223,7 +1223,7 @@ def set_user(user_id, values) :
* 當發生故障或因為水平擴展而產生新的節點時,新的節點中將不會有快取資料,直到資料庫更新為止。將快取模式和寫入模式一起使用可以減緩這種現象。
* 被寫入多數的資料可能永遠都不會被讀取,你可以設定 TTL 來解決這種問題。
#### 事後寫入(回寫)
<p align="center">
<img src="images/rgSrvjG.png">
@ -1259,18 +1259,18 @@ def set_user(user_id, values) :
### 快取的缺點
* 需要保持快取和資料庫之間資料的一致性,比如說要如何設定 [快取無效](https://en.wikipedia.org/wiki/Cache_algorithms)。
* 需要更改應用程式程式碼來支援像是 Redis 或 Memcached 等快取服務。
* 快取的無效性是個難題,而什麼時候要更新快取就是個對應的複雜問題。
### 來源及延伸閱讀
* [從快取到記憶體資料網格技術](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast)
* [可擴展的系統設計模式](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html)
* [可擴展的系統架構介紹](http://lethain.com/introduction-to-architecting-systems-for-scale/)
* [可擴展性、可用性、穩定性與模式](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
* [可擴展性](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
* [AWS ElastiCache 策略](http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Strategies.html)
* [維基百科](https://en.wikipedia.org/wiki/Cache_(computing))
## 非同步機制
@ -1314,10 +1314,10 @@ def set_user(user_id, values) :
### 來源及延伸閱讀
* [這是一個數字遊戲](https://www.youtube.com/watch?v=1KRYH75wgy4)
* [當過載時,使用背壓](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html)
* [利特爾法則](https://en.wikipedia.org/wiki/Little%27s_law)
* [訊息佇列和工作佇列有什麼不同?](https://www.quora.com/What-is-the-difference-between-a-message-queue-and-a-task-queue-Why-would-a-task-queue-require-a-message-broker-like-RabbitMQ-Redis-Celery-or-IronMQ-to-function)
## 通訊
@ -1327,11 +1327,11 @@ def set_user(user_id, values) :
<i><a href=http://www.escotal.com/osilayer.html>來源OSI 七層模型</a></i>
</p>
### 超文件通訊協定 (HTTP)
HTTP 是一種在客戶端和伺服器端傳輸資料和定義編碼的方法。它是基於請求/回應的協議客戶端發出請求而伺服器端則針對請求內容完成對應的行為並進行回應。HTTP 是獨立的,它允許請求和回應經過許多負載平衡、快取、加密和壓縮的中間路由器和伺服器。
一個基本的 HTTP 請求是由一個動詞(方法)和一個資源(端點)所組成。以下是常見的 HTTP 動詞:
| 動詞 | 描述 | 冪等* | 安全性 | 可快取性 |
|--------|----------------------------------|-------|--------|-----------------------------------------|
@ -1347,11 +1347,11 @@ HTTP 是依賴於較底層的協議(例如:**TCP** 和 **UDP**) 的應用層
#### 來源及延伸閱讀
* [什麼是 HTTP?](https://www.nginx.com/resources/glossary/http/)
* [HTTP 和 TCP 的差別](https://www.quora.com/What-is-the-difference-between-HTTP-protocol-and-TCP-protocol)
* [PUT 和 PATCH 的差別](https://laracasts.com/discuss/channels/general-discussion/whats-the-differences-between-put-and-patch?page=1)
### 傳輸控制通訊協定(TCP)
<p align="center">
<img src="images/JdAsdvG.jpg">
@ -1361,10 +1361,10 @@ HTTP 是依賴於較底層的協議(例如:**TCP** 和 **UDP**) 的應用層
TCP 是透過 [IP 網路](https://en.wikipedia.org/wiki/Internet_Protocol) 面向連線的通訊協定。連線是透過 [握手](https://en.wikipedia.org/wiki/Handshaking) 的方式來建立和斷開連接,所有發送的資料在接收時會保證順序,另外透過以下的機制來保證資料不會損毀:
* 每個資料的序列號碼和 [校驗碼](https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Checksum_computation)
* [確認訊息](https://en.wikipedia.org/wiki/Acknowledgement_(data_networks)) 和自動重傳
如果發送端沒有收到正確的回應會重新發送資料如果有多次的逾期時連線就會斷開。TCP 實作了 [流量控制](https://en.wikipedia.org/wiki/Flow_control_(data)) 和 [阻塞控制](https://en.wikipedia.org/wiki/Network_congestion#Congestion_control),這些機制會導致延遲,而且通常傳輸的效率會比 UDP 來得低。
為了確保高吞吐量Web 伺服器會保持大量的 TCP 連線,進而導致記憶體用量變大。在 Web 伺服器之間使用大量的開放連線可能是昂貴的,更別說是在 memcached 快取中做這些事情。[連線池](https://en.wikipedia.org/wiki/Connection_pool) 可以幫助在適合的情況下切換到 UDP。
@ -1375,7 +1375,7 @@ TCP 對於需要高可靠、低時間急迫性的應用來說很有用,比如
* 你需要資料完整無缺
* 你想要自動地對網路的流量進行最佳評估
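以下以標準函式庫的 `socket` 模組示意一個最簡單的 TCP 客戶端(`example.com` 為示意用網域);上述的握手、排序與重傳都發生在 `connect()` 與 `sendall()` 之內:

```python
import socket

# SOCK_STREAM 即 TCP面向連線、保證順序與可靠性的位元組串流
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(conn.recv(1024))
```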
### 使用者資料流通訊協定 (UDP)
<p align="center">
<img src="images/yzDrJtA.jpg">
@ -1383,7 +1383,7 @@ TCP 對於需要高可靠、低時間急迫性的應用來說很有用,比如
<i><a href=http://www.wildbunny.co.uk/blog/2012/10/09/how-to-make-a-multi-player-game-part-1/>資料來源:如何製作多人遊戲</a></i>
</p>
UDP 是非連線型的通訊協定。資料流(類似於封包)只在資料流級別進行確保。資料可能會不按照順序地到達目的地也可能會遺失。UDP 並不支援阻塞處理,儘管 UDP 不像 TCP 一樣可靠,但通常效率更好。
UDP 可以透過廣播來傳送資料流到所有子網路中的所有裝置,這對於 [DHCP](https://en.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol) 來說很有用,因為所有子網路中的設備還沒有分配到 IP 位置,而對 TCP 來說IP 是必須的。
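相較之下UDP 只是送出單一資料包,沒有連線、沒有順序保證、也不保證送達(位址與埠號為假設值):

```python
import socket

# SOCK_DGRAM 即 UDP沒有握手也沒有重傳的資料包
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)
sock.sendto(b"ping", ("203.0.113.10", 9999))   # 假設的位址與埠號
try:
    data, addr = sock.recvfrom(1024)           # 回應可能永遠不會到,這正是 UDP 的特性
except socket.timeout:
    data = None
finally:
    sock.close()
```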
@ -1397,14 +1397,14 @@ UDP 的可靠性較低,但適合用在像是網路電話、視訊聊天、串
#### 來源及延伸閱讀
* [遊戲程式撰寫的網路架構](http://gafferongames.com/networking-for-game-programmers/udp-vs-tcp/)
* [TCP 和 UDP 的關鍵區別](http://www.cyberciti.biz/faq/key-differences-between-tcp-and-udp-protocols/)
* [TCP 和 UDP 的差別](http://stackoverflow.com/questions/5970383/difference-between-tcp-and-udp)
* [傳輸控制協議(TCP)](https://en.wikipedia.org/wiki/Transmission_Control_Protocol)
* [使用者資料流協議(UDP)](https://en.wikipedia.org/wiki/User_Datagram_Protocol)
* [Memcache 在 Facebook 中的可擴展性設計](http://www.cs.bu.edu/~jappavoo/jappavoo.github.com/451/papers/memcache-fb.pdf)
### 遠端程式呼叫 (RPC)
<p align="center">
<img src="images/iF4Mkb5.png">
@ -1412,7 +1412,7 @@ UDP 的可靠性較低,但適合用在像是網路電話、視訊聊天、串
<i><a href=http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview>資料來源:破解系統設計面試</a></i>
</p>
在一個 RPC 中,客戶端會去呼叫另外一個位置空間(通常是在遠端的伺服器)的方法。呼叫的方式就像是呼叫本地端的一個方法一樣,客戶端和伺服器溝通的具體過程被抽象化,而遠端呼叫相較於本地端呼叫來說一般較慢,而且可靠性較差,因此了解如何區別這兩種方法是必要的。熱門的 RPC 框架包含了 [Protobuf](https://developers.google.com/protocol-buffers/)、[Thrift](https://thrift.apache.org/) 和 [Avro](https://avro.apache.org/docs/current/)。
RPC 是一個請求-回應的通訊協定:
@ -1451,18 +1451,18 @@ RPC 專注於揭露行為,它通常用來處理內部通訊的效能問題,
* RPC 的客戶端會變得和伺服器的實作綁得更死
* 一個新的 API 必須在每個操作或使用案例中進行定義
* RPC 很難抓錯誤
* 你很難方便的修改現有的技術,舉例來說,如果你希望在 [Squid](http://www.squid-cache.org/) 這樣的快取伺服器上確保 [RPC 呼叫被正確的快取](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/),你可能需要多費額外的努力。
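以下用 Python 標準函式庫的 `xmlrpc` 模組示意請求-回應式的 RPC並非上面提到的 Protobuf/Thrift/Avro但「像呼叫本地函式一樣呼叫遠端函式」的形式相同

```python
# --- 伺服器端 ---
from xmlrpc.server import SimpleXMLRPCServer

def add(x, y):
    return x + y

server = SimpleXMLRPCServer(("127.0.0.1", 8000), allow_none=True)
server.register_function(add)      # 將 add() 公開給遠端呼叫
# server.serve_forever()           # 取消註解後啟動伺服器

# --- 客戶端 ---
import xmlrpc.client
proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:8000/")
# print(proxy.add(2, 3))           # 看起來像本地呼叫,實際上經過 HTTP
```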
### 具象狀態轉移 (REST)
REST 是一個規範客戶端/伺服器端架構設計的模型。客戶端基於伺服器管理的系列操作,伺服器提供修改或取得資源的介面,所有的通訊必須是無狀態、可快取的。
Restful 的設計有四個原則:
* **標示資源 (HTTP 中的 URI)** - 無論任何操作都使用相同的 URI
* **表示層的改變 (HTTP 中的動作)** - 使用 HTTP 動詞、Headers 和 body
* **可自我描述的錯誤訊息 (HTTP 中的狀態碼)** - 使用狀態碼,不要重複造輪子
* **[HATEOAS](http://restcookbook.com/Basics/hateoas/) (HTTP 中的 HTML 介面)** - 你的 Web 伺服器應該要能夠透過瀏覽器訪問
REST 請求範例:
@ -1473,12 +1473,12 @@
```
{"anotherdata": "another value"}
```
REST 關注於揭露資料,減少客戶端/伺服器之間耦合的程度,並且經常用在公共的 HTTP API 設計上。REST 使用更通用和受規範的方法來透過 URI 來揭露資源,[透過 Headers 來描述](https://github.com/for-GET/know-your-http-well/blob/master/headers.md) ,並透過 GET、POST、PUT、DELETE 和 PATCH 等動作來進行操作因為無狀態的特性REST 易於橫向擴展和分片。
#### REST 的缺點
* 因為 REST 的重點是放在如何揭露資料所以當資料不是以自然的形式組成時或是結構相當複雜時REST 可能無法很好的處理他們。舉個範例,回傳過去一小時中與特定事件吻合的更新操作就很難透過路徑來表示,使用 REST可能會使用 URI、查詢參數和請求本身來實現。
* REST 一般依賴於幾個動詞操作(GET、POST、PUT、DELETE 和 PATCH) ,但有時候這些操作無法滿足你的需求,舉個範例,將過期的文件移動到歸檔文件資料庫中這樣的操作,可能就沒辦法簡單的使用以上幾個動詞操作來完成。
* REST 一般依賴於幾個動詞操作(GET、POST、PUT、DELETE 和 PATCH),但有時候這些操作無法滿足你的需求,舉個範例,將過期的文件移動到歸檔文件資料庫中這樣的操作,可能就沒辦法簡單的使用以上幾個動詞操作來完成。
* 對於那些多層複雜的資源來說,需要在客戶端和伺服器端進行多次請求,例如:獲得部落格頁面及相關評論,而對於網路環境較不穩定的行動端應用來說,這些多次往返的請求是非常麻煩的。
* 隨著時間的增加API 的回應中可能會增加更多的欄位,比較舊的客戶端還是會收到所有新的回應內容,即時他們不需要這些回應,這會造成他們的負擔,並且造成更大的延遲。
@ -1500,30 +1500,30 @@ REST 關注於揭露資料,減少客戶端/伺服器之間耦合的程度,
#### 來源及延伸閱讀
* [你真的知道為什麼你更喜歡 REST 而不是 RPC 嗎?](https://apihandyman.io/do-you-really-know-why-you-prefer-rest-over-rpc/)
* [什麼時候 RPC 比 REST 更適合](http://programmers.stackexchange.com/a/181186)
* [REST 和 JSON-RPC](http://stackoverflow.com/questions/15056878/rest-vs-json-rpc)
* [揭開 RPC 和 REST 的神秘面紗](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/)
* [使用 REST 的缺點](https://www.quora.com/What-are-the-drawbacks-of-using-RESTful-APIs)
* [破解系統設計面試](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
* [Thrift](https://code.facebook.com/posts/1468950976659943/)
* [為什麼在內部要使用 REST 而不是 RPC](http://arstechnica.com/civis/viewtopic.php?t=1190508)
## 資訊安全
這一章節需要更多的貢獻,一起[加入](#如何貢獻)吧!
資訊安全是一個廣泛的議題,除非你有相當的經驗、資訊安全的背景或正在申請相關的職位要求對應的知識,否則了解以下的基礎內容即可:
* 在傳輸和等待的過程中進行加密
* 對所有使用者輸入和從使用者得到的參數進行處理,以避免 [XSS](https://en.wikipedia.org/wiki/Cross-site_scripting) 和 [SQL injection](https://en.wikipedia.org/wiki/SQL_injection)
* 使用參數化輸入來避免 SQL injection
* 使用 [最小權限原則](https://en.wikipedia.org/wiki/Principle_of_least_privilege)
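以下以標準函式庫的 `sqlite3` 示意參數化查詢:使用者輸入只會被當成資料綁定,無法改變 SQL 陳述式本身:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

user_supplied = "alice' OR '1'='1"   # 典型的注入嘗試
# 驅動程式會把值當成資料綁定,因此無法改變 SQL 陳述式
rows = conn.execute("SELECT id, name FROM users WHERE name = ?", (user_supplied,)).fetchall()
print(rows)  # 輸出 [],注入嘗試不會比對到任何資料
```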
### 來源及延伸閱讀
* [為開發者準備的資訊安全指南](https://github.com/FallibleInc/security-guide-for-developers)
* [OWASP top ten](https://www.owasp.org/index.php/OWASP_Top_Ten_Cheat_Sheet)
## 附錄
@ -1546,7 +1546,7 @@ REST 關注於揭露資料,減少客戶端/伺服器之間耦合的程度,
#### 來源及延伸閱讀
* [2 的次方](https://en.wikipedia.org/wiki/Power_of_two)
### 每個開發者都應該知道的延遲數量級
@ -1587,14 +1587,14 @@ Notes
#### 視覺化延遲數
![](https://camo.githubusercontent.com/77f72259e1eb58596b564d1ad823af1853bc60a3/687474703a2f2f692e696d6775722e636f6d2f6b307431652e706e67)
#### 來源及延伸閱讀
* [每個程式設計師都應該知道的延遲數量級 - 1](https://gist.github.com/jboner/2841832)
* [每個程式設計師都應該知道的延遲數量級 - 2](https://gist.github.com/hellerbarde/2843375)
* [關於建置大型分散式系統所需要知道的設計方案、課程和建議](http://www.cs.cornell.edu/projects/ladis2009/talks/dean-keynote-ladis2009.pdf)
* [從軟體工程師的角度來看建置大型分散式系統](https://static.googleusercontent.com/media/research.google.com/en//people/jeff/stanford-295-talk.pdf)
### 其他的系統設計面試問題
@ -1602,28 +1602,28 @@ Notes
| 問題 | 來源 |
|----------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Design a file sync service like Dropbox | [youtube.com](https://www.youtube.com/watch?v=PE4gwstWhmc) |
| Design a search engine like Google | [queue.acm.org](http://queue.acm.org/detail.cfm?id=988407)<br/>[stackexchange.com](http://programmers.stackexchange.com/questions/38324/interview-question-how-would-you-implement-google-search)<br/>[ardendertat.com](http://www.ardendertat.com/2012/01/11/implementing-search-engines/)<br/>[stanford.edu](http://infolab.stanford.edu/~backrub/google.html) |
| Design a scalable web crawler like Google | [quora.com](https://www.quora.com/How-can-I-build-a-web-crawler-from-scratch) |
| Design Google Docs | [code.google.com](https://code.google.com/p/google-mobwrite/)<br/>[neil.fraser.name](https://neil.fraser.name/writing/sync/) |
| Design a key-value store like Redis | [slideshare.net](http://www.slideshare.net/dvirsky/introduction-to-redis) |
| Design a cache system like Memcached | [slideshare.net](http://www.slideshare.net/oemebamo/introduction-to-memcached) |
| Design a recommendation system like Amazon's | [hulu.com](http://tech.hulu.com/blog/2011/09/19/recommendation-system.html) [ijcai13.org](http://ijcai13.org/files/tutorial_slides/td3.pdf) |
| Design a short URL service like Bitly | [n00tc0d3r.blogspot.com](http://n00tc0d3r.blogspot.com/) |
| Design an instant messaging system like WhatsApp | [highscalability.com](http://highscalability.com/blog/2014/2/26/the-whatsapp-architecture-facebook-bought-for-19-billion.html) |
| Design a photo sharing service like Instagram | [highscalability.com](http://highscalability.com/flickr-architecture)<br/>[highscalability.com](http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html) |
| Design the Facebook news feed function | [quora.com](http://www.quora.com/What-are-best-practices-for-building-something-like-a-News-Feed)<br/>[quora.com](http://www.quora.com/Activity-Streams/What-are-the-scaling-issues-to-keep-in-mind-while-developing-a-social-network-feed)<br/>[slideshare.net](http://www.slideshare.net/danmckinley/etsy-activity-feeds-architecture) |
| Design the Facebook timeline function | [facebook.com](https://www.facebook.com/note.php?note_id=10150468255628920)<br/>[highscalability.com](http://highscalability.com/blog/2012/1/23/facebook-timeline-brought-to-you-by-the-power-of-denormaliza.html) |
| Design the Facebook chat function | [erlang-factory.com](http://www.erlang-factory.com/upload/presentations/31/EugeneLetuchy-ErlangatFacebook.pdf)<br/>[facebook.com](https://www.facebook.com/note.php?note_id=14218138919&id=9445547199&index=0) |
| Design a graph search system like Facebook's | [facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-building-out-the-infrastructure-for-graph-search/10151347573598920)<br/>[facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-indexing-and-ranking-in-graph-search/10151361720763920)<br/>[facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-the-natural-language-interface-of-graph-search/10151432733048920) |
| Design a content delivery network like CloudFlare | [cmu.edu](http://repository.cmu.edu/cgi/viewcontent.cgi?article=2112&context=compsci) |
| Design a microblogging service like Twitter | [michael-noll.com](http://www.michael-noll.com/blog/2013/01/18/implementing-real-time-trending-topics-in-storm/)<br/>[snikolov .wordpress.com](http://snikolov.wordpress.com/2012/11/14/early-detection-of-twitter-trends/) |
| Design a random ID generation system | [blog.twitter.com](https://blog.twitter.com/2010/announcing-snowflake)<br/>[github.com](https://github.com/twitter/snowflake/) |
| Given a time period, return the top k most frequent requests | [ucsb.edu](https://icmi.cs.ucsb.edu/research/tech_reports/reports/2005-23.pdf)<br/>[wpi.edu](http://davis.wpi.edu/xmdv/docs/EDBT11-diyang.pdf) |
| Design a system that serves data from multiple data centers | [highscalability.com](http://highscalability.com/blog/2009/8/24/how-google-serves-data-from-multiple-datacenters.html) |
| Design an online multiplayer card game | [indieflashblog.com](https://web.archive.org/web/20180929181117/http://www.indieflashblog.com/how-to-create-an-asynchronous-multiplayer-game.html)<br/>[buildnewgames.com](http://buildnewgames.com/real-time-multiplayer/) |
| Design a garbage collection system | [stuffwithstuff.com](http://journal.stuffwithstuff.com/2013/12/08/babys-first-garbage-collector/)<br/>[washington.edu](http://courses.cs.washington.edu/courses/csep521/07wi/prj/rick.pdf) |
| Add a system design question | [Contribute](#如何貢獻) |
### Real world architectures
@ -1643,54 +1643,54 @@ Notes
| Type | System | Reference(s) |
|----------|----------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|
| Data processing | **MapReduce** - Distributed data processing from Google | [research.google.com](http://static.googleusercontent.com/media/research.google.com/zh-CN/us/archive/mapreduce-osdi04.pdf) |
| Data processing | **Spark** - Distributed data processing from Databricks | [slideshare.net](http://www.slideshare.net/AGrishchenko/apache-spark-architecture) |
| Data processing | **Storm** - Distributed data processing from Twitter | [slideshare.net](http://www.slideshare.net/previa/storm-16094009) |
| | | |
| Data store | **Bigtable** - Column-oriented database from Google | [harvard.edu](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf) |
| Data store | **HBase** - Open source implementation of Bigtable | [slideshare.net](http://www.slideshare.net/alexbaranau/intro-to-hbase) |
| Data store | **Cassandra** - Column-oriented database from Facebook | [slideshare.net](http://www.slideshare.net/planetcassandra/cassandra-introduction-features-30103666) |
| Data store | **DynamoDB** - Document-oriented database from Amazon | [harvard.edu](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/decandia07dynamo.pdf) |
| Data store | **MongoDB** - Document-oriented database | [slideshare.net](http://www.slideshare.net/mdirolf/introduction-to-mongodb) |
| Data store | **Spanner** - Globally-distributed database from Google | [research.google.com](http://research.google.com/archive/spanner-osdi2012.pdf) |
| Data store | **Memcached** - Distributed in-memory caching system | [slideshare.net](http://www.slideshare.net/oemebamo/introduction-to-memcached) |
| Data store | **Redis** - Distributed caching system with persistence and value types | [slideshare.net](http://www.slideshare.net/dvirsky/introduction-to-redis) |
| | | |
| File system | **Google File System (GFS)** - Distributed file system | [research.google.com](http://static.googleusercontent.com/media/research.google.com/zh-CN/us/archive/gfs-sosp2003.pdf) |
| File system | **Hadoop File System (HDFS)** - Open source implementation of GFS | [apache.org](https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html) |
| | | |
| Misc | **Chubby** - Lock service for loosely-coupled distributed systems from Google | [research.google.com](http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/chubby-osdi06.pdf) |
| Misc | **Dapper** - Distributed systems tracing infrastructure | [research.google.com](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36356.pdf) |
| Misc | **Kafka** - Pub/sub message queue from LinkedIn | [slideshare.net](http://www.slideshare.net/mumrah/kafka-talk-tri-hug) |
| Misc | **Zookeeper** - Centralized infrastructure and coordination service | [slideshare.net](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) |
| | Add an architecture | [Contribute](#如何貢獻) |
### Company architectures
| Company | Reference(s) |
|----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Amazon | [Amazon architecture](http://highscalability.com/amazon-architecture) |
| Cinchcast | [Producing 1,500 hours of audio every day](http://highscalability.com/blog/2012/7/16/cinchcast-architecture-producing-1500-hours-of-audio-every-d.html) |
| DataSift | [Realtime datamining at 120,000 tweets per second](http://highscalability.com/blog/2011/11/29/datasift-architecture-realtime-datamining-at-120000-tweets-p.html) |
| DropBox | [How we've scaled Dropbox](https://www.youtube.com/watch?v=PE4gwstWhmc) |
| ESPN | [Operating at 100,000 duh nuh nuhs per second](http://highscalability.com/blog/2013/11/4/espns-architecture-at-scale-operating-at-100000-duh-nuh-nuhs.html) |
| Google | [Google architecture](http://highscalability.com/google-architecture) |
| Instagram | [14 million users, terabytes of photos](http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html)<br/>[What powers Instagram](http://instagram-engineering.tumblr.com/post/13649370142/what-powers-instagram-hundreds-of-instances) |
| Justin.tv | [Justin.Tv's live video broadcasting architecture](http://highscalability.com/blog/2010/3/16/justintvs-live-video-broadcasting-architecture.html) |
| Facebook | [Scaling memcached at Facebook](https://cs.uwaterloo.ca/~brecht/courses/854-Emerging-2014/readings/key-value/fb-memcached-nsdi-2013.pdf)<br/>[TAO: Facebook's distributed data store for the social graph](https://cs.uwaterloo.ca/~brecht/courses/854-Emerging-2014/readings/data-store/tao-facebook-distributed-datastore-atc-2013.pdf)<br/>[Facebook's photo storage](https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Beaver.pdf) |
| Flickr | [Flickr architecture](http://highscalability.com/flickr-architecture) |
| Mailbox | [From 0 to one million users in 6 weeks](http://highscalability.com/blog/2013/6/18/scaling-mailbox-from-0-to-one-million-users-in-6-weeks-and-1.html) |
| Pinterest | [From 0 to 10s of billions of page views a month](http://highscalability.com/blog/2013/4/15/scaling-pinterest-from-0-to-10s-of-billions-of-page-views-a.html)<br/>[18 million visitors, 10x growth, 12 employees](http://highscalability.com/blog/2012/5/21/pinterest-architecture-update-18-million-visitors-10x-growth.html) |
| Playfish | [50 million monthly users and growing](http://highscalability.com/blog/2010/9/21/playfishs-social-gaming-architecture-50-million-monthly-user.html) |
| PlentyOfFish | [PlentyOfFish architecture](http://highscalability.com/plentyoffish-architecture) |
| Salesforce | [How they handle 1.3 billion transactions a day](http://highscalability.com/blog/2013/9/23/salesforce-architecture-how-they-handle-13-billion-transacti.html) |
| Stack Overflow | [Stack Overflow architecture](http://highscalability.com/blog/2009/8/5/stack-overflow-architecture.html) |
| TripAdvisor | [40M visitors, 200M dynamic page views, 30 TB of data](http://highscalability.com/blog/2011/6/27/tripadvisor-architecture-40m-visitors-200m-dynamic-page-view.html) |
| Tumblr | [15 billion page views a month](http://highscalability.com/blog/2012/2/13/tumblr-architecture-15-billion-page-views-a-month-and-harder.html) |
| Twitter | [Making Twitter 10000 percent faster](http://highscalability.com/scaling-twitter-making-twitter-10000-percent-faster)<br/>[Storing 250 million tweets a day using MySQL](http://highscalability.com/blog/2011/12/19/how-twitter-stores-250-million-tweets-a-day-using-mysql.html)<br/>[150M active users, 300K QPS, a 22 MB/S firehose](http://highscalability.com/blog/2013/7/8/the-architecture-twitter-uses-to-deal-with-150m-active-users.html)<br/>[Timelines at scale](https://www.infoq.com/presentations/Twitter-Timeline-Scalability)<br/>[Big and small data at Twitter](https://www.youtube.com/watch?v=5cKTP36HVgI)<br/>[Operations at Twitter: scaling beyond 100 million users](https://www.youtube.com/watch?v=z8LU0Cj6BOU) |
| Uber | [How Uber scales their real-time market platform](http://highscalability.com/blog/2015/9/14/how-uber-scales-their-real-time-market-platform.html) |
| WhatsApp | [The WhatsApp architecture Facebook bought for $19 billion](http://highscalability.com/blog/2014/2/26/the-whatsapp-architecture-facebook-bought-for-19-billion.html) |
| YouTube | [YouTube scalability](https://www.youtube.com/watch?v=w5WVu624fY8) [YouTube architecture](http://highscalability.com/youtube-architecture) |
### Company engineering blogs
@ -1698,60 +1698,60 @@ Notes
>
> Questions you encounter might be from the same domain.
* [Airbnb Engineering](http://nerds.airbnb.com/)
* [Atlassian Developers](https://developer.atlassian.com/blog/)
* [Autodesk Engineering](http://cloudengineering.autodesk.com/blog/)
* [AWS Blog](https://aws.amazon.com/blogs/aws/)
* [Bitly Engineering Blog](http://word.bitly.com/)
* [Box Blogs](https://www.box.com/blog/engineering/)
* [Cloudera Developer Blog](http://blog.cloudera.com/blog/)
* [Dropbox Tech Blog](https://tech.dropbox.com/)
* [Engineering at Quora](http://engineering.quora.com/)
* [Ebay Tech Blog](http://www.ebaytechblog.com/)
* [Evernote Tech Blog](https://blog.evernote.com/tech/)
* [Etsy Code as Craft](http://codeascraft.com/)
* [Facebook Engineering](https://www.facebook.com/Engineering)
* [Flickr Code](http://code.flickr.net/)
* [Foursquare Engineering Blog](http://engineering.foursquare.com/)
* [GitHub Engineering Blog](http://githubengineering.com/)
* [Google Research Blog](http://googleresearch.blogspot.com/)
* [Groupon Engineering Blog](https://engineering.groupon.com/)
* [Heroku Engineering Blog](https://engineering.heroku.com/)
* [Hubspot Engineering Blog](http://product.hubspot.com/blog/topic/engineering)
* [High Scalability](http://highscalability.com/)
* [Instagram Engineering](http://instagram-engineering.tumblr.com/)
* [Intel Software Blog](https://software.intel.com/en-us/blogs/)
* [Jane Street Tech Blog](https://blogs.janestreet.com/category/ocaml/)
* [LinkedIn Engineering](http://engineering.linkedin.com/blog)
* [Microsoft Engineering](https://engineering.microsoft.com/)
* [Microsoft Python Engineering](https://blogs.msdn.microsoft.com/pythonengineering/)
* [Netflix Tech Blog](http://techblog.netflix.com/)
* [Paypal Developer Blog](https://devblog.paypal.com/category/engineering/)
* [Pinterest Engineering Blog](http://engineering.pinterest.com/)
* [Quora Engineering](https://engineering.quora.com/)
* [Reddit Blog](http://www.redditblog.com/)
* [Salesforce Engineering Blog](https://developer.salesforce.com/blogs/engineering/)
* [Slack Engineering Blog](https://slack.engineering/)
* [Spotify Labs](https://labs.spotify.com/)
* [Twilio Engineering Blog](http://www.twilio.com/engineering)
* [Twitter Engineering](https://engineering.twitter.com/)
* [Uber Engineering Blog](http://eng.uber.com/)
* [Yahoo Engineering Blog](http://yahooeng.tumblr.com/)
* [Yelp Engineering Blog](http://engineeringblog.yelp.com/)
* [Zynga Engineering Blog](https://www.zynga.com/blogs/engineering)
#### Source(s) and further reading
* [kilimchoi/engineering-blogs](https://github.com/kilimchoi/engineering-blogs)
## Under development
Interested in adding some content or helping complete a section in progress? [Contribute](#如何貢獻)!
* Distributed computing with MapReduce
* Consistent hashing
* Direct memory access
* [Contribute](#如何貢獻)
## Credits
@ -1759,15 +1759,15 @@ Notes
Special thanks to:
* [Hired in tech](http://www.hiredintech.com/system-design/the-system-design-process/)
* [Cracking the coding interview](https://www.amazon.com/dp/0984782850/)
* [High scalability](http://highscalability.com/)
* [checkcheckzz/system-design-interview](https://github.com/checkcheckzz/system-design-interview)
* [shashank88/system_design](https://github.com/shashank88/system_design)
* [mmcgrana/services-engineering](https://github.com/mmcgrana/services-engineering)
* [System design cheat sheet](https://gist.github.com/vasanthk/485d1c25737e8e72759f)
* [A distributed systems reading list](http://dancres.github.io/Pages/)
* [Cracking the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
## Contact info
@ -1777,10 +1777,10 @@ Notes
## License
*I am providing code and resources in this repository to you under an open source license. Because this is my personal repository, the license you receive for my code and resources is from me and not my employer (Facebook).*
Copyright 2017 Donne Martin
Creative Commons Attribution 4.0 International License (CC BY 4.0)
http://creativecommons.org/licenses/by/4.0/
832
README.md
View File
@ -1,4 +1,4 @@
*[English](README.md) ∙ [日本語](README-ja.md) ∙ [简体中文](README-zh-Hans.md) ∙ [繁體中文](README-zh-TW.md) | [العَرَبِيَّة‎](https://github.com/donnemartin/system-design-primer/issues/170) ∙ [বাংলা](https://github.com/donnemartin/system-design-primer/issues/220) ∙ [Português do Brasil](https://github.com/donnemartin/system-design-primer/issues/40) ∙ [Deutsch](https://github.com/donnemartin/system-design-primer/issues/186) ∙ [ελληνικά](https://github.com/donnemartin/system-design-primer/issues/130) ∙ [עברית](https://github.com/donnemartin/system-design-primer/issues/272) ∙ [Italiano](https://github.com/donnemartin/system-design-primer/issues/104) ∙ [한국어](https://github.com/donnemartin/system-design-primer/issues/102) ∙ [فارسی](https://github.com/donnemartin/system-design-primer/issues/110) ∙ [Polski](https://github.com/donnemartin/system-design-primer/issues/68) ∙ [русский язык](https://github.com/donnemartin/system-design-primer/issues/87) ∙ [Español](https://github.com/donnemartin/system-design-primer/issues/136) ∙ [ภาษาไทย](https://github.com/donnemartin/system-design-primer/issues/187) ∙ [Türkçe](https://github.com/donnemartin/system-design-primer/issues/39) ∙ [tiếng Việt](https://github.com/donnemartin/system-design-primer/issues/127) ∙ [Français](https://github.com/donnemartin/system-design-primer/issues/250) | [Add Translation](https://github.com/donnemartin/system-design-primer/issues/28)*
**Help [translate](TRANSLATIONS.md) this guide!**
@ -37,11 +37,11 @@ In addition to coding interviews, system design is a **required component** of t
Additional topics for interview prep:
* [Study guide](#study-guide)
* [How to approach a system design interview question](#how-to-approach-a-system-design-interview-question)
* [System design interview questions, **with solutions**](#system-design-interview-questions-with-solutions)
* [Object-oriented design interview questions, **with solutions**](#object-oriented-design-interview-questions-with-solutions)
* [Additional system design interview questions](#additional-system-design-interview-questions)
## Anki flashcards
@ -52,24 +52,24 @@ Additional topics for interview prep:
The provided [Anki flashcard decks](https://apps.ankiweb.net/) use spaced repetition to help you retain key system design concepts.
* [System design deck](https://github.com/donnemartin/system-design-primer/tree/master/resources/flash_cards/System%20Design.apkg)
* [System design exercises deck](https://github.com/donnemartin/system-design-primer/tree/master/resources/flash_cards/System%20Design%20Exercises.apkg)
* [Object oriented design exercises deck](https://github.com/donnemartin/system-design-primer/tree/master/resources/flash_cards/OO%20Design.apkg)
Great for use while on-the-go.
### Coding Resource: Interactive Coding Challenges
Looking for resources to help you prep for the [**Coding Interview**](https://github.com/donnemartin/interactive-coding-challenges)?
<p align="center">
<img src="images/b4YtAEN.png">
<br/>
</p>
Check out the sister repo [**Interactive Coding Challenges**](https://github.com/donnemartin/interactive-coding-challenges), which contains an additional Anki deck:
* [Coding deck](https://github.com/donnemartin/interactive-coding-challenges/tree/master/anki_cards/Coding.apkg)
## Contributing
@ -80,11 +80,11 @@ Feel free to submit pull requests to help:
* Fix errors
* Improve sections
* Add new sections
* [Translate](https://github.com/donnemartin/system-design-primer/issues/28)
Content that needs some polishing is placed [under development](#under-development).
Review the [Contributing Guidelines](CONTRIBUTING.md).
## Index of system design topics
@ -97,93 +97,93 @@ Review the [Contributing Guidelines](CONTRIBUTING.md) .
<br/>
</p>
* [System design topics: start here](#system-design-topics-start-here)
* [Step 1: Review the scalability video lecture](#step-1-review-the-scalability-video-lecture)
* [Step 2: Review the scalability article](#step-2-review-the-scalability-article)
* [Next steps](#next-steps)
* [Performance vs scalability](#performance-vs-scalability)
* [Latency vs throughput](#latency-vs-throughput)
* [Availability vs consistency](#availability-vs-consistency)
* [CAP theorem](#cap-theorem)
* [CP - consistency and partition tolerance](#cp---consistency-and-partition-tolerance)
* [AP - availability and partition tolerance](#ap---availability-and-partition-tolerance)
* [Consistency patterns](#consistency-patterns)
* [Weak consistency](#weak-consistency)
* [Eventual consistency](#eventual-consistency)
* [Strong consistency](#strong-consistency)
* [Availability patterns](#availability-patterns)
* [Fail-over](#fail-over)
* [Replication](#replication)
* [Availability in numbers](#availability-in-numbers)
* [Domain name system](#domain-name-system)
* [Content delivery network](#content-delivery-network)
* [Push CDNs](#push-cdns)
* [Pull CDNs](#pull-cdns)
* [Load balancer](#load-balancer)
* [Active-passive](#active-passive)
* [Active-active](#active-active)
* [Layer 4 load balancing](#layer-4-load-balancing)
* [Layer 7 load balancing](#layer-7-load-balancing)
* [Horizontal scaling](#horizontal-scaling)
* [Reverse proxy (web server)](#reverse-proxy-web-server)
* [Load balancer vs reverse proxy](#load-balancer-vs-reverse-proxy)
* [Application layer](#application-layer)
* [Microservices](#microservices)
* [Service discovery](#service-discovery)
* [Database](#database)
* [Relational database management system (RDBMS)](#relational-database-management-system-rdbms)
* [Master-slave replication](#master-slave-replication)
* [Master-master replication](#master-master-replication)
* [Federation](#federation)
* [Sharding](#sharding)
* [Denormalization](#denormalization)
* [SQL tuning](#sql-tuning)
* [NoSQL](#nosql)
* [Key-value store](#key-value-store)
* [Document store](#document-store)
* [Wide column store](#wide-column-store)
* [Graph Database](#graph-database)
* [SQL or NoSQL](#sql-or-nosql)
* [Cache](#cache)
* [Client caching](#client-caching)
* [CDN caching](#cdn-caching)
* [Web server caching](#web-server-caching)
* [Database caching](#database-caching)
* [Application caching](#application-caching)
* [Caching at the database query level](#caching-at-the-database-query-level)
* [Caching at the object level](#caching-at-the-object-level)
* [When to update the cache](#when-to-update-the-cache)
* [Cache-aside](#cache-aside)
* [Write-through](#write-through)
* [Write-behind (write-back)](#write-behind-write-back)
* [Refresh-ahead](#refresh-ahead)
* [Asynchronism](#asynchronism)
* [Message queues](#message-queues)
* [Task queues](#task-queues)
* [Back pressure](#back-pressure)
* [Communication](#communication)
* [Transmission control protocol (TCP)](#transmission-control-protocol-tcp)
* [User datagram protocol (UDP)](#user-datagram-protocol-udp)
* [Remote procedure call (RPC)](#remote-procedure-call-rpc)
* [Representational state transfer (REST)](#representational-state-transfer-rest)
* [Security](#security)
* [Appendix](#appendix)
* [Powers of two table](#powers-of-two-table)
* [Latency numbers every programmer should know](#latency-numbers-every-programmer-should-know)
* [Additional system design interview questions](#additional-system-design-interview-questions)
* [Real world architectures](#real-world-architectures)
* [Company architectures](#company-architectures)
* [Company engineering blogs](#company-engineering-blogs)
* [Under development](#under-development)
* [Credits](#credits)
* [Contact info](#contact-info)
* [License](#license)
## Study guide
> Suggested topics to review based on your interview timeline (short, medium, long).
![Imgur](images/OfVllex.png)
**Q: For interviews, do I need to know everything here?**
@ -245,10 +245,10 @@ Outline a high level design with all important components.
### Step 3: Design core components
Dive into details for each core component. For example, if you were asked to [design a url shortening service](solutions/system_design/pastebin/README.md), discuss:
* Generating and storing a hash of the full url
* [MD5](solutions/system_design/pastebin/README.md) and [Base62](solutions/system_design/pastebin/README.md) (a short hashing sketch follows this list)
* Hash collisions
* SQL or NoSQL
* Database schema
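A minimal sketch of the hashing step above, assuming an MD5 digest that is Base62-encoded and truncated to a 7-character code (the length and alphabet are illustrative choices, not the solution's exact parameters):

```python
import hashlib

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def base62_encode(num):
    """Encode a non-negative integer using the 62-character alphabet."""
    if num == 0:
        return ALPHABET[0]
    digits = []
    while num:
        num, rem = divmod(num, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

def shorten(url, length=7):
    """MD5-hash the full url, Base62-encode the digest, and truncate it."""
    digest = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return base62_encode(digest)[:length]

print(shorten("https://example.com/some/very/long/path"))  # prints a 7-character code
```

Truncating the encoded digest keeps codes short at the cost of a higher collision risk, which is why hash collisions appear as a separate discussion point in the list above.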
@ -265,24 +265,24 @@ Identify and address bottlenecks, given the constraints. For example, do you ne
* Caching
* Database sharding
Discuss potential solutions and trade-offs. Everything is a trade-off. Address bottlenecks using [principles of scalable system design](#index-of-system-design-topics).
### Back-of-the-envelope calculations
You might be asked to do some estimates by hand. Refer to the [Appendix](#appendix) for the following resources:
* [Use back of the envelope calculations](http://highscalability.com/blog/2011/1/26/google-pro-tip-use-back-of-the-envelope-calculations-to-choo.html)
* [Powers of two table](#powers-of-two-table)
* [Latency numbers every programmer should know](#latency-numbers-every-programmer-should-know)
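For example, a rough estimate might look like the sketch below; the traffic and payload figures are assumptions chosen purely for illustration:

```python
# Assumed workload: 10 million users, 10 reads and 1 write per user per day, 1 KB per write.
users = 10_000_000
reads_per_user_per_day = 10
writes_per_user_per_day = 1
bytes_per_write = 1_000
seconds_per_day = 86_400

read_qps = users * reads_per_user_per_day / seconds_per_day
write_qps = users * writes_per_user_per_day / seconds_per_day
new_data_per_year_gb = users * writes_per_user_per_day * bytes_per_write * 365 / 1e9

print(f"~{read_qps:,.0f} read QPS, ~{write_qps:,.0f} write QPS")   # ~1,157 and ~116
print(f"~{new_data_per_year_gb:,.0f} GB of new data per year")     # ~3,650 GB
```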
### Source(s) and further reading
Check out the following links to get a better idea of what to expect:
* [How to ace a systems design interview](https://www.palantir.com/2011/10/how-to-rock-a-systems-design-interview/)
* [The system design interview](http://www.hiredintech.com/system-design)
* [Intro to Architecture and Systems Design Interviews](https://www.youtube.com/watch?v=ZgdS0EUmn70)
* [System design template](https://leetcode.com/discuss/career/229177/My-System-Design-Template)
## System design interview questions with solutions
@ -302,53 +302,53 @@ Check out the following links to get a better idea of what to expect:
| Design a system that scales to millions of users on AWS | [Solution](solutions/system_design/scaling_aws/README.md) |
| Add a system design question | [Contribute](#contributing) |
### Design Pastebin.com (or Bit.ly)
[View exercise and solution](solutions/system_design/pastebin/README.md)
![Imgur](images/4edXG0T.png)
### Design the Twitter timeline and search (or Facebook feed and search)
[View exercise and solution](solutions/system_design/twitter/README.md)
![Imgur](images/jrUBAF7.png)
### Design a web crawler
[View exercise and solution](solutions/system_design/web_crawler/README.md)
![Imgur](images/bWxPtQA.png)
### Design Mint.com
[View exercise and solution](solutions/system_design/mint/README.md)
![Imgur](images/V5q57vU.png)
### Design the data structures for a social network
[View exercise and solution](solutions/system_design/social_graph/README.md)
![Imgur](images/cdCv5g7.png)
### Design a key-value store for a search engine
[View exercise and solution](solutions/system_design/query_cache/README.md)
![Imgur](images/4j99mhe.png)
### Design Amazon's sales ranking by category feature
[View exercise and solution](solutions/system_design/sales_rank/README.md)
![Imgur](images/MzExP06.png)
### Design a system that scales to millions of users on AWS
[View exercise and solution](solutions/system_design/scaling_aws/README.md)
![Imgur](images/jj3A5N8.png)
## Object-oriented design interview questions with solutions
@ -360,13 +360,13 @@ Check out the following links to get a better idea of what to expect:
| Question | |
|---|---|
| Design a hash map | [Solution](solutions/object_oriented_design/hash_table/hash_map.ipynb) |
| Design a least recently used cache | [Solution](solutions/object_oriented_design/lru_cache/lru_cache.ipynb) |
| Design a call center | [Solution](solutions/object_oriented_design/call_center/call_center.ipynb) |
| Design a deck of cards | [Solution](solutions/object_oriented_design/deck_of_cards/deck_of_cards.ipynb) |
| Design a parking lot | [Solution](solutions/object_oriented_design/parking_lot/parking_lot.ipynb) |
| Design a chat server | [Solution](solutions/object_oriented_design/online_chat/online_chat.ipynb) |
| Design a circular array | [Contribute](#contributing) |
| Add an object-oriented design question | [Contribute](#contributing) |
## System design topics: start here
@ -377,7 +377,7 @@ First, you'll need a basic understanding of common principles, learning about wh
### Step 1: Review the scalability video lecture
[Scalability Lecture at Harvard](https://www.youtube.com/watch?v=-W9F__D3oY4)
* Topics covered:
* Vertical scaling
@ -389,13 +389,13 @@ First, you'll need a basic understanding of common principles, learning about wh
### Step 2: Review the scalability article
[Scalability](http://www.lecloud.net/tagged/scalability/chrono)
* Topics covered:
* [Clones](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
* [Databases](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
* [Caches](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
* [Asynchronism](http://www.lecloud.net/post/9699762917/scalability-for-dummies-part-4-asynchronism)
### Next steps
@ -420,8 +420,8 @@ Another way to look at performance vs scalability:
### Source(s) and further reading
* [A word on scalability](http://www.allthingsdistributed.com/2006/03/a_word_on_scalability.html)
* [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
## Latency vs throughput
@ -433,7 +433,7 @@ Generally, you should aim for **maximal throughput** with **acceptable latency**
### Source(s) and further reading
* [Understanding latency vs throughput](https://community.cadence.com/cadence_blogs_8/b/sd/archive/2010/09/13/understanding-latency-vs-throughput)
## Availability vs consistency
@ -465,10 +465,10 @@ AP is a good choice if the business needs allow for [eventual consistency](#even
### Source(s) and further reading
* [CAP theorem revisited](http://robertgreiner.com/2014/08/cap-theorem-revisited/)
* [A plain english introduction to CAP theorem](http://ksat.me/a-plain-english-introduction-to-cap-theorem)
* [CAP FAQ](https://github.com/henryr/cap-faq)
* [The CAP theorem](https://www.youtube.com/watch?v=k-Yaq8AHlFA)
## Consistency patterns
@ -482,7 +482,7 @@ This approach is seen in systems such as memcached. Weak consistency works well
### Eventual consistency
After a write, reads will eventually see it (typically within milliseconds). Data is replicated asynchronously.
This approach is seen in systems such as DNS and email. Eventual consistency works well in highly available systems.
@ -494,7 +494,7 @@ This approach is seen in file systems and RDBMSes. Strong consistency works wel
### Source(s) and further reading
* [Transactions across data centers](http://snarfed.org/transactions_across_datacenters_io.html)
## Availability patterns
@ -518,7 +518,7 @@ If the servers are public-facing, the DNS would need to know about the public IP
Active-active failover can also be referred to as master-master failover.
### Disadvantage(s): failover
* Fail-over adds more hardware and additional complexity.
* There is a potential for loss of data if the active system fails before any newly written data can be replicated to the passive.
@ -529,8 +529,8 @@ Active-active failover can also be referred to as master-master failover.
This topic is further discussed in the [Database](#database) section:
* [Master-slave replication](#master-slave-replication)
* [Master-master replication](#master-master-replication)
### Availability in numbers
@ -563,7 +563,7 @@ If a service consists of multiple components prone to failure, the service's ove
Overall availability decreases when two components with availability < 100% are in sequence:
```
Availability (Total) = Availability (Foo) * Availability (Bar)
```
If both `Foo` and `Bar` each had 99.9% availability, their total availability in sequence would be 99.8%.
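A quick check of the figures above, treating `Foo` and `Bar` as two components at 99.9% availability each (a small illustrative calculation, not part of the original formula block):

```python
# Sequence formula from the code block above, with two components at 99.9%.
foo = bar = 0.999

total = foo * bar
downtime_minutes_per_year = (1 - total) * 365 * 24 * 60

print(f"{total:.4%} total availability")                              # 99.8001%
print(f"~{downtime_minutes_per_year:.0f} minutes of downtime per year")  # ~1051
```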
@ -588,33 +588,33 @@ If both `Foo` and `Bar` each had 99.9% availability, their total availability in
A Domain Name System (DNS) translates a domain name such as www.example.com to an IP address.
DNS is hierarchical, with a few authoritative servers at the top level. Your router or ISP provides information about which DNS server(s) to contact when doing a lookup. Lower level DNS servers cache mappings, which could become stale due to DNS propagation delays. DNS results can also be cached by your browser or OS for a certain period of time, determined by the [time to live (TTL)](https://en.wikipedia.org/wiki/Time_to_live).
* **NS record (name server)** - Specifies the DNS servers for your domain/subdomain.
* **MX record (mail exchange)** - Specifies the mail servers for accepting messages.
* **A record (address)** - Points a name to an IP address (see the lookup sketch after this list).
* **CNAME (canonical)** - Points a name to another name or `CNAME` (example.com to www.example.com) or to an `A` record.
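A minimal lookup sketch for the `A` record above, using the operating system's resolver via Python's standard library; `www.example.com` is a placeholder host, and caching/TTL handling happens in the resolver and the upstream DNS servers:

```python
import socket

# Ask the OS resolver for the addresses behind a host name (A/AAAA records).
for family, _, _, _, sockaddr in socket.getaddrinfo(
    "www.example.com", 80, proto=socket.IPPROTO_TCP
):
    print(family, sockaddr[0])
```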
Services such as [CloudFlare](https://www.cloudflare.com/dns/) and [Route 53](https://aws.amazon.com/route53/) provide managed DNS services. Some DNS services can route traffic through various methods:
* [Weighted round robin](https://www.g33kinfo.com/info/round-robin-vs-weighted-round-robin-lb)
* Prevent traffic from going to servers under maintenance
* Balance between varying cluster sizes
* A/B testing
* [Latency-based](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-latency)
* [Geolocation-based](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-geo)
### Disadvantage(s): DNS
* Accessing a DNS server introduces a slight delay, although mitigated by caching described above.
* DNS server management could be complex and is generally managed by [governments, ISPs, and large companies](http://superuser.com/questions/472695/who-controls-the-dns-servers/472729).
* DNS services have recently come under [DDoS attack](http://dyn.com/blog/dyn-analysis-summary-of-friday-october-21-attack/), preventing users from accessing websites such as Twitter without knowing Twitter's IP address(es).
### Source(s) and further reading
* [DNS architecture](https://technet.microsoft.com/en-us/library/dd197427(v=ws.10).aspx)
* [Wikipedia](https://en.wikipedia.org/wiki/Domain_Name_System)
* [DNS articles](https://support.dnsimple.com/categories/dns/)
## Content delivery network
Pull CDNs grab new content from your server when the first user requests the content. You leave the content on your server and rewrite URLs to point to the CDN. This results in a slower request until the content is cached on the CDN.
A [time-to-live (TTL)](https://en.wikipedia.org/wiki/Time_to_live) determines how long content is cached. Pull CDNs minimize storage space on the CDN, but can create redundant traffic if files expire and are pulled before they have actually changed.
Sites with heavy traffic work well with pull CDNs, as traffic is spread out more evenly with only recently-requested content remaining on the CDN.
### Disadvantage(s): CDN
* CDN costs could be significant depending on traffic, although this should be weighed with additional costs you would incur not using a CDN.
* Content might be stale if it is updated before the TTL expires it.
### Source(s) and further reading
* [Globally distributed content delivery](https://figshare.com/articles/Globally_distributed_content_delivery/6605972)
* [The differences between push and pull CDNs](http://www.travelblogadvice.com/technical/the-differences-between-push-and-pull-cdns/)
* [Wikipedia](https://en.wikipedia.org/wiki/Content_delivery_network)
## Load balancer
Load balancers can route traffic based on various metrics, including:
* Random
* Least loaded
* Session/cookies
* [Round robin or weighted round robin](https://www.g33kinfo.com/info/round-robin-vs-weighted-round-robin-lb) (see the sketch after this list)
* [Layer 4](#layer-4-load-balancing)
* [Layer 7](#layer-7-load-balancing)
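As referenced in the round robin item above, a weighted round robin selector can be sketched in a few lines; the servers and weights below are made up for illustration:

```python
import itertools

# Hypothetical backend pool: a higher weight means the server receives more requests.
SERVERS = [("10.0.0.1", 5), ("10.0.0.2", 3), ("10.0.0.3", 1)]

# Expand each server according to its weight, then cycle through the expanded list.
_pool = itertools.cycle([host for host, weight in SERVERS for _ in range(weight)])

def next_server():
    return next(_pool)

# Out of every 9 requests, roughly 5 go to .1, 3 go to .2, and 1 goes to .3.
print([next_server() for _ in range(9)])
```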
### Layer 4 load balancing
Layer 4 load balancers look at info at the [transport layer](#communication) to decide how to distribute requests. Generally, this involves the source, destination IP addresses, and ports in the header, but not the contents of the packet. Layer 4 load balancers forward network packets to and from the upstream server, performing [Network Address Translation (NAT)](https://www.nginx.com/resources/glossary/layer-4-load-balancing/).
### Layer 7 load balancing
Load balancers can also help with horizontal scaling, improving performance and availability. Scaling out using commodity machines is more cost efficient and results in higher availability than scaling up a single server on more expensive hardware, called **Vertical Scaling**. It is also easier to hire for talent working on commodity hardware than it is for specialized enterprise systems.
#### Disadvantage(s): horizontal scaling
* Scaling horizontally introduces complexity and involves cloning servers
* Servers should be stateless: they should not contain any user-related data like sessions or profile pictures
* Sessions can be stored in a centralized data store such as a [database](#database) (SQL, NoSQL) or a persistent [cache](#cache) (Redis, Memcached)
* Downstream servers such as caches and databases need to handle more simultaneous connections as upstream servers scale out
### Disadvantage(s): load balancer
* The load balancer can become a performance bottleneck if it does not have enough resources or if it is not configured properly.
* Introducing a load balancer to help eliminate a single point of failure results in increased complexity.
### Source(s) and further reading
* [NGINX architecture](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
* [HAProxy architecture guide](http://www.haproxy.org/download/1.2/doc/architecture.txt)
* [Scalability](http://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones)
* [Wikipedia](https://en.wikipedia.org/wiki/Load_balancing_(computing))
* [Layer 4 load balancing](https://www.nginx.com/resources/glossary/layer-4-load-balancing/)
* [Layer 7 load balancing](https://www.nginx.com/resources/glossary/layer-7-load-balancing/)
* [ELB listener config](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html)
## Reverse proxy (web server)
<p align="center">
<img src="images/n41Azff.png">
</p>
* Reverse proxies can be useful even with just one web server or application server, opening up the benefits described in the previous section.
* Solutions such as NGINX and HAProxy can support both layer 7 reverse proxying and load balancing.
### Disadvantage(s): reverse proxy
* Introducing a reverse proxy results in increased complexity.
* A single reverse proxy is a single point of failure; configuring multiple reverse proxies (i.e. a [failover](https://en.wikipedia.org/wiki/Failover)) further increases complexity.
### Source(s) and further reading
* [Reverse proxy vs load balancer](https://www.nginx.com/resources/glossary/reverse-proxy-vs-load-balancer/)
* [NGINX architecture](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/)
* [HAProxy architecture guide](http://www.haproxy.org/download/1.2/doc/architecture.txt)
* [Wikipedia](https://en.wikipedia.org/wiki/Reverse_proxy)
## Application layer
Separating out the web layer from the application layer (also known as platform layer) allows you to scale and configure both layers independently. Adding a new API results in adding application servers without necessarily adding additional web servers. The **single responsibility principle** advocates for small and autonomous services that work together. Small teams with small services can plan more aggressively for rapid growth.
Workers in the application layer also help enable [asynchronism](#asynchronism).
### Microservices
Related to this discussion are [microservices](https://en.wikipedia.org/wiki/Microservices), which can be described as a suite of independently deployable, small, modular services. Each service runs a unique process and communicates through a well-defined, lightweight mechanism to serve a business goal. <sup><a href=https://smartbear.com/learn/api-design/what-are-microservices>1</a></sup>
Pinterest, for example, could have the following microservices: user profile, follower, feed, search, photo upload, etc.
### Service Discovery
Systems such as [Consul](https://www.consul.io/docs/index.html), [Etcd](https://coreos.com/etcd/docs/latest), and [Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) can help services find each other by keeping track of registered names, addresses, and ports. [Health checks](https://www.consul.io/intro/getting-started/checks.html) help verify service integrity and are often done using an [HTTP](#hypertext-transfer-protocol-http) endpoint. Both Consul and Etcd have a built in [key-value store](#key-value-store) that can be useful for storing config values and other shared data.
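The sketch below is a toy, in-process registry, not the Consul, Etcd, or Zookeeper APIs; it only illustrates the core idea that services register a name, address, and port, and clients look them up and filter on a crude health signal:

```python
import time

# Toy registry: {service_name: [{"address": ..., "port": ..., "last_heartbeat": ...}]}
registry = {}
HEARTBEAT_TIMEOUT = 10  # seconds without a heartbeat before an instance is considered unhealthy

def register(name, address, port):
    registry.setdefault(name, []).append(
        {"address": address, "port": port, "last_heartbeat": time.time()}
    )

def heartbeat(name, address, port):
    for instance in registry.get(name, []):
        if instance["address"] == address and instance["port"] == port:
            instance["last_heartbeat"] = time.time()

def discover(name):
    # Return only instances that have sent a recent heartbeat.
    now = time.time()
    return [(i["address"], i["port"])
            for i in registry.get(name, [])
            if now - i["last_heartbeat"] < HEARTBEAT_TIMEOUT]

register("user-profile", "10.0.0.5", 8080)
print(discover("user-profile"))  # [('10.0.0.5', 8080)]
```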
### Disadvantage(s): application layer
* Adding an application layer with loosely coupled services requires a different approach from an architectural, operations, and process viewpoint (vs a monolithic system).
* Microservices can add complexity in terms of deployments and operations.
### Source(s) and further reading
* [Intro to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale)
* [Crack the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
* [Service oriented architecture](https://en.wikipedia.org/wiki/Service-oriented_architecture)
* [Introduction to Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper)
* [Here's what you need to know about building microservices](https://cloudncode.wordpress.com/2016/07/22/msa-getting-started/)
## Database
<i><a href=https://www.youtube.com/watch?v=kKjm4ehYiMs>Source: Scaling up to your first 10 million users</a></i>
</p>
### Relational database management system (RDBMS)
A relational database like SQL is a collection of data items organized in tables.
**ACID** is a set of properties of relational database [transactions](https://en.wikipedia.org/wiki/Database_transaction).
* **Atomicity** - Each transaction is all or nothing
* **Consistency** - Any transaction will bring the database from one valid state to another
#### Master-slave replication

The master serves reads and writes, replicating writes to one or more slaves, which serve only reads.
<i><a href=http://www.slideshare.net/jboner/scalability-availability-stability-patterns/>Source: Scalability, availability, stability, patterns</a></i>
</p>
##### Disadvantage(s): master-slave replication
* Additional logic is needed to promote a slave to a master.
* See [Disadvantage(s): replication](#disadvantages-replication) for points related to **both** master-slave and master-master.
#### Master-master replication
Both masters serve reads and writes and coordinate with each other on writes. If either master goes down, the system can continue to operate with both reads and writes.
<i><a href=http://www.slideshare.net/jboner/scalability-availability-stability-patterns/>Source: Scalability, availability, stability, patterns</a></i>
</p>
##### Disadvantage(s): master-master replication
* You'll need a load balancer or you'll need to make changes to your application logic to determine where to write.
* Most master-master systems are either loosely consistent (violating ACID) or have increased write latency due to synchronization.
* Conflict resolution comes more into play as more write nodes are added and as latency increases.
* See [Disadvantage(s): replication](#disadvantages-replication) for points related to **both** master-slave and master-master.
##### Disadvantage(s): replication
* There is a potential for loss of data if the master fails before any newly written data can be replicated to other nodes.
* Writes are replayed to the read replicas. If there are a lot of writes, the read replicas can get bogged down with replaying writes and can't do as many reads.
##### Source(s) and further reading: replication
* [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
* [Multi-master replication](https://en.wikipedia.org/wiki/Multi-master_replication)
#### Federation
Federation (or functional partitioning) splits up databases by function. For example, instead of a single, monolithic database, you could have three databases: **forums**, **users**, and **products**, resulting in less read and write traffic to each database and therefore less replication lag. Smaller databases result in more data that can fit in memory, which in turn results in more cache hits due to improved cache locality. With no single central master serializing writes you can write in parallel, increasing throughput.
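A rough sketch of the application-side routing this implies; the connection strings below are hypothetical placeholders:

```python
# One connection per functional database (hypothetical hosts; credentials omitted).
DATABASES = {
    "forums": "postgresql://db-forums:5432/forums",
    "users": "postgresql://db-users:5432/users",
    "products": "postgresql://db-products:5432/products",
}

def connection_for(function):
    # Application logic decides which database a read or write should go to.
    return DATABASES[function]

print(connection_for("users"))  # postgresql://db-users:5432/users
```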
##### Disadvantage(s): federation
* Federation is not effective if your schema requires huge functions or tables.
* You'll need to update your application logic to determine which database to read and write.
* Joining data from two databases is more complex with a [server link](http://stackoverflow.com/questions/5145637/querying-data-by-joining-two-tables-in-two-database-on-different-servers).
* Federation adds more hardware and additional complexity.
##### Source(s) and further reading: federation
* [Scaling up to your first 10 million users](https://www.youtube.com/watch?v=kKjm4ehYiMs)
#### Sharding
Sharding distributes data across different databases such that each database can only manage a subset of the data. Taking a users database as an example, as the number of users increases, more shards are added to the cluster.
Similar to the advantages of [federation](#federation), sharding results in less read and write traffic, less replication, and more cache hits. Index size is also reduced, which generally improves performance with faster queries. If one shard goes down, the other shards are still operational, although you'll want to add some form of replication to avoid data loss. Like federation, there is no single central master serializing writes, allowing you to write in parallel with increased throughput.
Common ways to shard a table of users are either by the user's last name initial or by the user's geographic location.
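A minimal sketch of shard selection; the shard count and the hash-based scheme are arbitrary choices for illustration:

```python
import hashlib

NUM_SHARDS = 4  # arbitrary; real systems plan shard counts around growth and rebalancing

def shard_for_user(user_id):
    # Hashing the key spreads users evenly across shards, unlike last name initials,
    # which can skew the distribution.
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

shard = shard_for_user(12345)  # deterministic index in [0, NUM_SHARDS)
print("route query to users_{0}".format(shard))
```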
##### Disadvantage(s): sharding
* You'll need to update your application logic to work with shards, which could result in complex SQL queries.
* Data distribution can become lopsided in a shard. For example, a set of power users on a shard could result in increased load to that shard compared to others.
##### Source(s) and further reading: sharding
* [The coming of the shard](http://highscalability.com/blog/2009/8/6/an-unorthodox-approach-to-database-design-the-coming-of-the.html)
* [Shard database architecture](https://en.wikipedia.org/wiki/Shard_(database_architecture))
* [Consistent hashing](http://www.paperplanes.de/2011/12/9/the-magic-of-consistent-hashing.html)
#### Denormalization
Denormalization attempts to improve read performance at the expense of some write performance. Redundant copies of the data are written in multiple tables to avoid expensive joins. Some RDBMS such as [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL) and Oracle support [materialized views](https://en.wikipedia.org/wiki/Materialized_view) which handle the work of storing redundant information and keeping redundant copies consistent.
Once data becomes distributed with techniques such as [federation](#federation) and [sharding](#sharding), managing joins across data centers further increases complexity. Denormalization might circumvent the need for such complex joins.
In most systems, reads can heavily outnumber writes 100:1 or even 1000:1. A read resulting in a complex database join can be very expensive, spending a significant amount of time on disk operations.
##### Disadvantage(s): denormalization
* Data is duplicated.
* Constraints can help redundant copies of information stay in sync, which increases complexity of the database design.
###### Source(s) and further reading: denormalization
* [Denormalization](https://en.wikipedia.org/wiki/Denormalization)
#### SQL tuning
It's important to **benchmark** and **profile** to simulate and uncover bottlenecks.
* **Benchmark** - Simulate high-load situations with tools such as [ab](http://httpd.apache.org/docs/2.2/programs/ab.html).
* **Profile** - Enable tools such as the [slow query log](http://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html) to help track performance issues.
Benchmarking and profiling might point you to the following optimizations.
* Use `INT` for larger numbers up to 2^32 or 4 billion.
* Use `DECIMAL` for currency to avoid floating point representation errors.
* Avoid storing large `BLOBS`; store the location of where to get the object instead.
* `VARCHAR(255)` is commonly used because 255 is the largest number of characters that can be counted with an 8-bit number, often maximizing the use of a byte in some RDBMS.
* Set the `NOT NULL` constraint where applicable to [improve search performance](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search).
##### Use good indices
##### Tune the query cache
* In some cases, the [query cache](https://dev.mysql.com/doc/refman/5.7/en/query-cache.html) could lead to [performance issues](https://www.percona.com/blog/2016/10/12/mysql-5-7-performance-tuning-immediately-after-installation/).
##### Source(s) and further reading: SQL tuning
* [Tips for optimizing MySQL queries](http://aiddroid.com/10-tips-optimizing-mysql-queries-dont-suck/)
* [Is there a good reason i see VARCHAR(255) used so often?](http://stackoverflow.com/questions/1217466/is-there-a-good-reason-i-see-varchar255-used-so-often-as-opposed-to-another-l)
* [How do null values affect performance?](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search)
* [Slow query log](http://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html)
### NoSQL
NoSQL is a collection of data items represented in a **key-value store**, **document store**, **wide column store**, or a **graph database**. Data is denormalized, and joins are generally done in the application code. Most NoSQL stores lack true ACID transactions and favor [eventual consistency](#eventual-consistency).
**BASE** is often used to describe the properties of NoSQL databases. In comparison with the [CAP Theorem](#cap-theorem), BASE chooses availability over consistency.
* **Basically available** - the system guarantees availability.
* **Soft state** - the state of the system may change over time, even without input.
* **Eventual consistency** - the system will become consistent over a period of time, given that the system doesn't receive input during that period.
In addition to choosing between [SQL or NoSQL](#sql-or-nosql), it is helpful to understand which type of NoSQL database best fits your use case(s). We'll review **key-value stores**, **document stores**, **wide column stores**, and **graph databases** in the next section.
#### Key-value store
> Abstraction: hash table
A key-value store generally allows for O(1) reads and writes and is often backed by memory or SSD. Data stores can maintain keys in [lexicographic order](https://en.wikipedia.org/wiki/Lexicographical_order), allowing efficient retrieval of key ranges. Key-value stores can allow for storing of metadata with a value.
Key-value stores provide high performance and are often used for simple data models or for rapidly-changing data, such as an in-memory cache layer. Since they offer only a limited set of operations, complexity is shifted to the application layer if additional operations are needed.
A key-value store is the basis for more complex systems such as a document store.
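A brief sketch with the redis-py client, assuming a Redis server on localhost, shows the typical O(1) get/set interface plus simple metadata such as a TTL:

```python
import json
import redis  # third-party package: redis-py

r = redis.Redis(host="localhost", port=6379)

# O(1) write, with an expiration (TTL) attached to the value.
r.set("user:12345", json.dumps({"name": "Alice"}), ex=3600)

# O(1) read; returns None if the key is missing or expired.
raw = r.get("user:12345")
print(json.loads(raw) if raw else None)
```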
##### Source(s) and further reading: key-value store
* [Key-value database](https://en.wikipedia.org/wiki/Key-value_database)
* [Disadvantages of key-value stores](http://stackoverflow.com/questions/4056093/what-are-the-disadvantages-of-using-a-key-value-table-over-nullable-columns-or)
* [Redis architecture](http://qnimate.com/overview-of-redis-architecture/)
* [Memcached architecture](https://www.adayinthelifeof.nl/2011/02/06/memcache-internals/)
#### Document store
> Abstraction: key-value store with documents stored as values
A document store is centered around documents (XML, JSON, binary, etc), where a document stores all information for a given object. Document stores provide APIs or a query language to query based on the internal structure of the document itself. *Note, many key-value stores include features for working with a value's metadata, blurring the lines between these two storage types.*
Based on the underlying implementation, documents are organized by collections, tags, metadata, or directories. Although documents can be organized or grouped together, documents may have fields that are completely different from each other.
Document stores provide high flexibility and are often used for working with occasionally changing data.
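A short sketch with pymongo, with placeholder connection details, illustrates querying on the internal structure of a document:

```python
from pymongo import MongoClient  # third-party package: pymongo

client = MongoClient("mongodb://localhost:27017")
db = client.app

# Documents in the same collection can have completely different fields.
db.users.insert_one({"name": "Alice", "interests": ["hiking", "jazz"]})
db.users.insert_one({"name": "Bob", "city": "Tokyo"})

# Query against the document's internal structure.
print(db.users.find_one({"interests": "jazz"}))
```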
##### Source(s) and further reading: document store
* [Document-oriented database](https://en.wikipedia.org/wiki/Document-oriented_database)
* [MongoDB architecture](https://www.mongodb.com/mongodb-architecture)
* [CouchDB architecture](https://blog.couchdb.org/2016/08/01/couchdb-2-0-architecture/)
* [Elasticsearch architecture](https://www.elastic.co/blog/found-elasticsearch-from-the-bottom-up)
#### Wide column store
> Abstraction: nested map `ColumnFamily<RowKey, Columns<ColKey, Value, Timestamp>>`
A wide column store's basic unit of data is a column (name/value pair). A column can be grouped in column families (analogous to a SQL table). Super column families further group column families. You can access each column independently with a row key, and columns with the same row key form a row. Each value contains a timestamp for versioning and for conflict resolution.
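The nested map abstraction can be pictured with plain Python dictionaries; the data below is made up:

```python
import time

# ColumnFamily<RowKey, Columns<ColKey, (Value, Timestamp)>>
users_column_family = {
    "user:12345": {                      # row key
        "name": ("Alice", time.time()),  # column key -> (value, timestamp)
        "city": ("Paris", time.time()),
    }
}

# Access a single column independently via row key + column key.
value, timestamp = users_column_family["user:12345"]["name"]
print(value, timestamp)
```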
Google introduced [Bigtable](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf) as the first wide column store, which influenced the open-source [HBase](https://www.edureka.co/blog/hbase-architecture/) often used in the Hadoop ecosystem, and [Cassandra](http://docs.datastax.com/en/cassandra/3.0/cassandra/architecture/archIntro.html) from Facebook. Stores such as BigTable, HBase, and Cassandra maintain keys in lexicographic order, allowing efficient retrieval of selective key ranges.
Wide column stores offer high availability and high scalability. They are often used for very large data sets.
##### Source(s) and further reading: wide column store
* [SQL & NoSQL, a brief history](http://blog.grio.com/2015/11/sql-nosql-a-brief-history.html)
* [Bigtable architecture](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf)
* [HBase architecture](https://www.edureka.co/blog/hbase-architecture/)
* [Cassandra architecture](http://docs.datastax.com/en/cassandra/3.0/cassandra/architecture/archIntro.html)
#### Graph database
In a graph database, each node is a record and each arc is a relationship between two nodes. Graph databases are optimized to represent complex relationships with many foreign keys or many-to-many relationships.
Graph databases offer high performance for data models with complex relationships, such as a social network. They are relatively new and are not yet widely-used; it might be more difficult to find development tools and resources. Many graphs can only be accessed with [REST APIs](#representational-state-transfer-rest).
##### Source(s) and further reading: graph
* [Graph database](https://en.wikipedia.org/wiki/Graph_database)
* [Neo4j](https://neo4j.com/)
* [FlockDB](https://blog.twitter.com/2010/introducing-flockdb)
#### Source(s) and further reading: NoSQL
* [Explanation of base terminology](http://stackoverflow.com/questions/3342497/explanation-of-base-terminology)
* [NoSQL databases a survey and decision guidance](https://medium.com/baqend-blog/nosql-databases-a-survey-and-decision-guidance-ea7823a822d#.wskogqenq)
* [Scalability](http://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database)
* [Introduction to NoSQL](https://www.youtube.com/watch?v=qI_g07C_Q5I)
* [NoSQL patterns](http://horicky.blogspot.com/2009/11/nosql-patterns.html)
### SQL or NoSQL
##### Source(s) and further reading: SQL or NoSQL
* [Scaling up to your first 10 million users](https://www.youtube.com/watch?v=kKjm4ehYiMs)
* [SQL vs NoSQL differences](https://www.sitepoint.com/sql-vs-nosql-differences/)
## Cache
Databases often benefit from a uniform distribution of reads and writes across their partitions. Popular items can skew the distribution, causing bottlenecks. Putting a cache in front of a database can help absorb uneven loads and spikes in traffic.
### Client caching
Caches can be located on the client side (OS or browser), [server side](#reverse-proxy-web-server), or in a distinct cache layer.
### CDN caching
### Database caching

Your database usually includes some level of caching in a default configuration, optimized for a generic use case. Tweaking these settings for specific usage patterns can further boost performance.
### Application caching
In-memory caches such as Memcached and Redis are key-value stores between your application and your data storage. Since the data is held in RAM, it is much faster than typical databases where data is stored on disk. RAM is more limited than disk, so [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) algorithms such as [least recently used (LRU)](https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_(LRU)) can help invalidate 'cold' entries and keep 'hot' data in RAM.
Redis additionally offers persistence options and built-in data structures such as sorted sets and lists.
### Caching at the object level
See your data as an object, similar to what you do with your application code. Have your application assemble the dataset from the database into a class instance or a data structure(s):
* Remove the object from cache if its underlying data has changed
* Allows for asynchronous processing: workers assemble objects by consuming the latest cached object
#### Cache-aside

The application is responsible for reading and writing from storage. The cache does not interact with storage directly. The application does the following:

* Look for entry in cache, resulting in a cache miss
* Load entry from the database
* Add entry to cache
* Return entry
```python
def get_user(self, user_id):
    key = "user.{0}".format(user_id)
    user = cache.get(key)  # look for the entry in the cache

    if user is None:  # cache miss: fall back to the database
        user = db.query("SELECT * FROM users WHERE user_id = {0}", user_id)
        if user is not None:
            cache.set(key, json.dumps(user))  # add the entry to the cache

    return user
```
Subsequent reads of data added to cache are fast. Cache-aside is also referred to as lazy loading. Only requested data is cached, which avoids filling up the cache with data that isn't requested.
##### Disadvantage(s): cache-aside
* Each cache miss results in three trips, which can cause a noticeable delay.
* Data can become stale if it is updated in the database. This issue is mitigated by setting a time-to-live (TTL) which forces an update of the cache entry, or by using write-through.
#### Write-through

The application uses the cache as the main data store, reading and writing data to it, while the cache is responsible for reading and writing to the database.
Application code:
```python
set_user(12345, {"foo":"bar"})
```
Cache code:
```python
def set_user(user_id, values):
    user = db.query("UPDATE Users WHERE id = {0}", user_id, values)  # synchronously write to the data store
    cache.set(user_id, user)  # then update the cache
```
Write-through is a slow overall operation due to the write operation, but subsequent reads of just written data are fast. Users are generally more tolerant of latency when updating data than reading data. Data in the cache is not stale.
##### Disadvantage(s): write through
* When a new node is created due to failure or scaling, the new node will not cache entries until the entry is updated in the database. Cache-aside in conjunction with write through can mitigate this issue.
* Most data written might never be read, which can be minimized with a TTL.
#### Write-behind (write-back)
<p align="center">
<img src="images/rgSrvjG.png">
</p>

In write-behind, the application does the following (a sketch follows the list):
* Add/update entry in cache
* Asynchronously write entry to the data store, improving write performance
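A minimal write-behind sketch using a background worker thread and a queue; the `cache` and `db` objects are assumed to behave like the ones in the earlier examples:

```python
import queue
import threading

write_queue = queue.Queue()

def writer():
    # Background worker drains the queue and persists entries asynchronously.
    while True:
        user_id, values = write_queue.get()
        db.query("UPDATE Users WHERE id = {0}", user_id, values)
        write_queue.task_done()

threading.Thread(target=writer, daemon=True).start()

def set_user(user_id, values):
    cache.set(user_id, values)          # update the cache immediately
    write_queue.put((user_id, values))  # defer the data store write
```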
##### Disadvantage(s): write-behind
* There could be data loss if the cache goes down prior to its contents hitting the data store.
* It is more complex to implement write-behind than it is to implement cache-aside or write-through.
#### Refresh-ahead

You can configure the cache to automatically refresh any recently accessed cache entry prior to its expiration.
Refresh-ahead can result in reduced latency vs read-through if the cache can accurately predict which items are likely to be needed in the future.
##### Disadvantage(s): refresh-ahead
* Not accurately predicting which items are likely to be needed in the future can result in worse performance than without refresh-ahead.
### Disadvantage(s): cache
* Need to maintain consistency between caches and the source of truth such as the database through [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms).
* Cache invalidation is a difficult problem; there is additional complexity associated with when to update the cache.
* Need to make application changes such as adding Redis or memcached.
### Source(s) and further reading
* [From cache to in-memory data grid](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast)
* [Scalable system design patterns](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html)
* [Introduction to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale/)
* [Scalability, availability, stability, patterns](http://www.slideshare.net/jboner/scalability-availability-stability-patterns/)
* [Scalability](http://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache)
* [AWS ElastiCache strategies](http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Strategies.html)
* [Wikipedia](https://en.wikipedia.org/wiki/Cache_(computing))
## Asynchronism
### Message queues

Message queues receive, hold, and deliver messages. If an operation is too slow to perform inline, you can use a message queue with the following workflow:

* An application publishes a job to the queue, then notifies the user of job status
* A worker picks up the job from the queue, processes it, then signals the job is complete
The user is not blocked and the job is processed in the background. During this time, the client might optionally do a small amount of processing to make it seem like the task has completed. For example, if posting a tweet, the tweet could be instantly posted to your timeline, but it could take some time before your tweet is actually delivered to all of your followers.
**[Redis](https://redis.io/)** is useful as a simple message broker but messages can be lost.
**[RabbitMQ](https://www.rabbitmq.com/)** is popular but requires you to adapt to the 'AMQP' protocol and manage your own nodes.
**[Amazon SQS](https://aws.amazon.com/sqs/)** is hosted but can have high latency and has the possibility of messages being delivered twice.
### Task queues
Task queues receive tasks and their related data, run them, then deliver their results. They can support scheduling and can be used to run computationally-intensive jobs in the background.
**[Celery](https://docs.celeryproject.org/en/stable/)** has support for scheduling and primarily has python support.
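A minimal Celery sketch, with a placeholder broker URL and a hypothetical `deliver_to_followers` helper, showing how a task is declared and queued for background execution:

```python
from celery import Celery  # third-party package: celery

app = Celery("tasks", broker="redis://localhost:6379/0")  # Redis as the message broker

@app.task
def fan_out_tweet(tweet_id):
    # Expensive work runs in a worker process, not in the web request.
    deliver_to_followers(tweet_id)  # hypothetical helper

# Called from the web tier; returns immediately while a worker picks up the job.
fan_out_tweet.delay(12345)
```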
### Back pressure
If queues start to grow significantly, the queue size can become larger than memory, resulting in cache misses, disk reads, and even slower performance. [Back pressure](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html) can help by limiting the queue size, thereby maintaining a high throughput rate and good response times for jobs already in the queue. Once the queue fills up, clients get a server busy or HTTP 503 status code to try again later. Clients can retry the request at a later time, perhaps with [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff).
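A client-side retry with exponential backoff might look roughly like this sketch; the endpoint is a placeholder and jitter is omitted for brevity:

```python
import time
import requests  # third-party package

def post_job_with_backoff(payload, max_retries=5):
    delay = 1  # seconds
    for attempt in range(max_retries):
        response = requests.post("https://example.com/jobs", json=payload)
        if response.status_code != 503:
            return response
        time.sleep(delay)  # the server is shedding load; back off before retrying
        delay *= 2         # 1s, 2s, 4s, 8s, ...
    raise RuntimeError("service still busy after {0} retries".format(max_retries))
```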
### Disadvantage(s): asynchronism
* Use cases such as inexpensive calculations and realtime workflows might be better suited for synchronous operations, as introducing queues can add delays and complexity.
### Source(s) and further reading
* [It's all a numbers game](https://www.youtube.com/watch?v=1KRYH75wgy4)
* [Applying back pressure when overloaded](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html)
* [Little's law](https://en.wikipedia.org/wiki/Little%27s_law)
* [What is the difference between a message queue and a task queue?](https://www.quora.com/What-is-the-difference-between-a-message-queue-and-a-task-queue-Why-would-a-task-queue-require-a-message-broker-like-RabbitMQ-Redis-Celery-or-IronMQ-to-function)
## Communication
<i><a href=http://www.escotal.com/osilayer.html>Source: OSI 7 layer model</a></i>
</p>
### Hypertext transfer protocol (HTTP)
HTTP is a method for encoding and transporting data between a client and a server. It is a request/response protocol: clients issue requests and servers issue responses with relevant content and completion status info about the request. HTTP is self-contained, allowing requests and responses to flow through many intermediate routers and servers that perform load balancing, caching, encryption, and compression.
A basic HTTP request consists of a verb (method) and a resource (endpoint). Below are common HTTP verbs:
| Verb | Description | Idempotent* | Safe | Cacheable |
|---|---|---|---|---|
HTTP is an application layer protocol relying on lower-level protocols such as **TCP** and **UDP**.
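The verbs map directly onto most HTTP client libraries; a brief sketch with the third-party `requests` package, using placeholder URLs:

```python
import requests  # third-party package

base = "https://api.example.com"

requests.get(base + "/users/123")                        # read a resource (safe, idempotent)
requests.post(base + "/users", json={"name": "Alice"})   # create a resource
requests.put(base + "/users/123", json={"name": "Al"})   # create or replace (idempotent)
requests.patch(base + "/users/123", json={"name": "A"})  # partially update
requests.delete(base + "/users/123")                     # delete (idempotent)
```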
#### Source(s) and further reading: HTTP
* [What is HTTP?](https://www.nginx.com/resources/glossary/http/)
* [Difference between HTTP and TCP](https://www.quora.com/What-is-the-difference-between-HTTP-protocol-and-TCP-protocol)
* [Difference between PUT and PATCH](https://laracasts.com/discuss/channels/general-discussion/whats-the-differences-between-put-and-patch?page=1)
### Transmission control protocol (TCP)
<p align="center">
<img src="images/JdAsdvG.jpg">
<i><a href=http://www.wildbunny.co.uk/blog/2012/10/09/how-to-make-a-multi-player-game-part-1/>Source: How to make a multiplayer game</a></i>
</p>
TCP is a connection-oriented protocol over an [IP network](https://en.wikipedia.org/wiki/Internet_Protocol). Connection is established and terminated using a [handshake](https://en.wikipedia.org/wiki/Handshaking). All packets sent are guaranteed to reach the destination in the original order and without corruption through:
* Sequence numbers and [checksum fields](https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Checksum_computation) for each packet
* [Acknowledgement](https://en.wikipedia.org/wiki/Acknowledgement_(data_networks)) packets and automatic retransmission
If the sender does not receive a correct response, it will resend the packets. If there are multiple timeouts, the connection is dropped. TCP also implements [flow control](https://en.wikipedia.org/wiki/Flow_control_(data)) and [congestion control](https://en.wikipedia.org/wiki/Network_congestion#Congestion_control). These guarantees cause delays and generally result in less efficient transmission than UDP.
To ensure high throughput, web servers can keep a large number of TCP connections open, resulting in high memory usage. It can be expensive to have a large number of open connections between web server threads and say, a [memcached](https://memcached.org/) server. [Connection pooling](https://en.wikipedia.org/wiki/Connection_pool) can help in addition to switching to UDP where applicable.
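A minimal TCP exchange with Python's standard `socket` module, using a placeholder host, illustrates the connection setup, ordered delivery, and teardown that TCP handles for you:

```python
import socket

# Connect (the TCP handshake happens here), send, receive, then close the connection.
with socket.create_connection(("example.com", 80)) as conn:
    conn.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while True:
        chunk = conn.recv(4096)  # bytes arrive in order and without corruption
        if not chunk:
            break
        response += chunk

print(response.split(b"\r\n")[0])  # e.g. b'HTTP/1.1 200 OK'
```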
Use TCP over UDP when:
* You need all of the data to arrive intact
* You want to automatically make a best estimate use of the network throughput
### User datagram protocol (UDP)
<p align="center">
<img src="images/yzDrJtA.jpg">
</p>
#### Source(s) and further reading: TCP and UDP
* [Networking for game programming](http://gafferongames.com/networking-for-game-programmers/udp-vs-tcp/)
* [Key differences between TCP and UDP protocols](http://www.cyberciti.biz/faq/key-differences-between-tcp-and-udp-protocols/)
* [Difference between TCP and UDP](http://stackoverflow.com/questions/5970383/difference-between-tcp-and-udp)
* [Transmission control protocol](https://en.wikipedia.org/wiki/Transmission_Control_Protocol)
* [User datagram protocol](https://en.wikipedia.org/wiki/User_Datagram_Protocol)
* [Scaling memcache at Facebook](http://www.cs.bu.edu/~jappavoo/jappavoo.github.com/451/papers/memcache-fb.pdf)
### Remote procedure call (RPC)
<p align="center">
<img src="images/iF4Mkb5.png">
<i><a href=http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview>Source: Crack the system design interview</a></i>
</p>
In an RPC, a client causes a procedure to execute on a different address space, usually a remote server. The procedure is coded as if it were a local procedure call, abstracting away the details of how to communicate with the server from the client program. Remote calls are usually slower and less reliable than local calls so it is helpful to distinguish RPC calls from local calls. Popular RPC frameworks include [Protobuf](https://developers.google.com/protocol-buffers/), [Thrift](https://thrift.apache.org/), and [Avro](https://avro.apache.org/docs/current/).
RPC is a request-response protocol.
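Python's standard library ships a simple RPC implementation; this sketch, with an arbitrary port and a made-up procedure, shows the remote call reading like a local one:

```python
# --- server process ---
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    return a + b  # the procedure that will execute on the server

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(add)
# server.serve_forever()  # blocking; run this in a separate process

# --- client process ---
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
# Looks like a local procedure call, but executes in the server's address space.
print(proxy.add(2, 3))  # 5
```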
HTTP APIs following **REST** tend to be used more often for public APIs.
#### Disadvantage(s): RPC
* RPC clients become tightly coupled to the service implementation.
* A new API must be defined for every new operation or use case.
* It can be difficult to debug RPC.
* You might not be able to leverage existing technologies out of the box. For example, it might require additional effort to ensure [RPC calls are properly cached](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/) on caching servers such as [Squid](http://www.squid-cache.org/).
### Representational state transfer (REST)
REST is an architectural style enforcing a client/server model where the client acts on a set of resources managed by the server. The server provides a representation of resources and actions that can either manipulate or get a new representation of resources. All communication must be stateless and cacheable.
There are four qualities of a RESTful interface:
* **Identify resources (URI in HTTP)** - use the same URI regardless of any operation.
* **Change with representations (Verbs in HTTP)** - use verbs, headers, and body.
* **Self-descriptive error message (status response in HTTP)** - Use status codes, don't reinvent the wheel.
* **[HATEOAS](http://restcookbook.com/Basics/hateoas/) (HTML interface for HTTP)** - your web service should be fully accessible in a browser.
Sample REST calls:
```
PUT /someresources/anId
{"anotherdata": "another value"}
```
REST is focused on exposing data. It minimizes the coupling between client/server and is often used for public HTTP APIs. REST uses a more generic and uniform method of exposing resources through URIs, [representation through headers](https://github.com/for-GET/know-your-http-well/blob/master/headers.md), and actions through verbs such as GET, POST, PUT, DELETE, and PATCH. Being stateless, REST is great for horizontal scaling and partitioning.
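As a concrete illustration of that uniform interface, here is a minimal client-side sketch using Python's standard library `urllib`; the `localhost` host and `/someresources/anId` path are the placeholder resource from the sample calls above, not a real API.

```python
import json
import urllib.request

BASE = 'http://localhost/someresources/anId'  # placeholder resource from the sample above

# GET: retrieve a representation of the resource.
with urllib.request.urlopen(BASE) as response:
    print(response.status, response.read())

# PUT: replace the resource; the verb, headers, and body carry the intent.
request = urllib.request.Request(
    BASE,
    data=json.dumps({'anotherdata': 'another value'}).encode(),
    headers={'Content-Type': 'application/json'},
    method='PUT',
)
with urllib.request.urlopen(request) as response:
    print(response.status)
```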
#### Disadvantage(s): REST
* With REST being focused on exposing data, it might not be a good fit if resources are not naturally organized or accessed in a simple hierarchy. For example, returning all updated records from the past hour matching a particular set of events is not easily expressed as a path. With REST, it is likely to be implemented with a combination of URI path, query parameters, and possibly the request body.
* REST typically relies on a few verbs (GET, POST, PUT, DELETE, and PATCH) which sometimes doesn't fit your use case. For example, moving expired documents to the archive folder might not cleanly fit within these verbs.
#### Source(s) and further reading: REST and RPC
* [Do you really know why you prefer REST over RPC](https://apihandyman.io/do-you-really-know-why-you-prefer-rest-over-rpc/)
* [When are RPC-ish approaches more appropriate than REST?](http://programmers.stackexchange.com/a/181186)
* [REST vs JSON-RPC](http://stackoverflow.com/questions/15056878/rest-vs-json-rpc)
* [Debunking the myths of RPC and REST](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/)
* [What are the drawbacks of using REST](https://www.quora.com/What-are-the-drawbacks-of-using-RESTful-APIs)
* [Crack the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
* [Thrift](https://code.facebook.com/posts/1468950976659943/)
* [Why REST for internal use and not RPC](http://arstechnica.com/civis/viewtopic.php?t=1190508)
## Security
This section could use some updates. Consider [contributing](#contributing)!
Security is a broad topic. Unless you have considerable experience, a security background, or are applying for a position that requires knowledge of security, you probably won't need to know more than the basics:
* Encrypt in transit and at rest.
* Sanitize all user inputs or any input parameters exposed to users to prevent [XSS](https://en.wikipedia.org/wiki/Cross-site_scripting) and [SQL injection](https://en.wikipedia.org/wiki/SQL_injection).
* Use parameterized queries to prevent SQL injection (see the sketch after this list).
* Use the principle of [least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege).
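A minimal sketch of the parameterized-query point above, using Python's built-in `sqlite3`; the `users` table and the hostile input are made up for illustration:

```python
import sqlite3

connection = sqlite3.connect(':memory:')
connection.execute('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)')
connection.execute('INSERT INTO users (name) VALUES (?)', ('alice',))

hostile_input = "alice' OR '1'='1"  # classic injection attempt

# Unsafe: splicing the input into the SQL text would change the query's meaning.
# query = "SELECT * FROM users WHERE name = '%s'" % hostile_input

# Safe: the driver sends the value separately from the SQL text.
rows = connection.execute(
    'SELECT * FROM users WHERE name = ?', (hostile_input,)
).fetchall()
print(rows)  # [] -- the injection attempt matches nothing
```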
### Source(s) and further reading
* [API security checklist](https://github.com/shieldfy/API-Security-Checklist)
* [Security guide for developers](https://github.com/FallibleInc/security-guide-for-developers)
* [OWASP top ten](https://www.owasp.org/index.php/OWASP_Top_Ten_Cheat_Sheet)
## Appendix
#### Source(s) and further reading
* [Powers of two](https://en.wikipedia.org/wiki/Power_of_two)
### Latency numbers every programmer should know
#### Latency numbers visualized
![](https://camo.githubusercontent.com/77f72259e1eb58596b564d1ad823af1853bc60a3/687474703a2f2f692e696d6775722e636f6d2f6b307431652e706e67)
#### Source(s) and further reading
* [Latency numbers every programmer should know - 1](https://gist.github.com/jboner/2841832)
* [Latency numbers every programmer should know - 2](https://gist.github.com/hellerbarde/2843375)
* [Designs, lessons, and advice from building large distributed systems](http://www.cs.cornell.edu/projects/ladis2009/talks/dean-keynote-ladis2009.pdf)
* [Software Engineering Advice from Building Large-Scale Distributed Systems](https://static.googleusercontent.com/media/research.google.com/en//people/jeff/stanford-295-talk.pdf)
### Additional system design interview questions
| Question | Reference(s) |
|---|---|
| Design a file sync service like Dropbox | [youtube.com](https://www.youtube.com/watch?v=PE4gwstWhmc) |
| Design a search engine like Google | [queue.acm.org](http://queue.acm.org/detail.cfm?id=988407)<br/>[stackexchange.com](http://programmers.stackexchange.com/questions/38324/interview-question-how-would-you-implement-google-search)<br/>[ardendertat.com](http://www.ardendertat.com/2012/01/11/implementing-search-engines/)<br/>[stanford.edu](http://infolab.stanford.edu/~backrub/google.html) |
| Design a scalable web crawler like Google | [quora.com](https://www.quora.com/How-can-I-build-a-web-crawler-from-scratch) |
| Design Google docs | [code.google.com](https://code.google.com/p/google-mobwrite/)<br/>[neil.fraser.name](https://neil.fraser.name/writing/sync/) |
| Design a key-value store like Redis | [slideshare.net](http://www.slideshare.net/dvirsky/introduction-to-redis) |
| Design a cache system like Memcached | [slideshare.net](http://www.slideshare.net/oemebamo/introduction-to-memcached) |
| Design a recommendation system like Amazon's | [hulu.com](https://web.archive.org/web/20170406065247/http://tech.hulu.com/blog/2011/09/19/recommendation-system.html)<br/>[ijcai13.org](http://ijcai13.org/files/tutorial_slides/td3.pdf) |
| Design a tinyurl system like Bitly | [n00tc0d3r.blogspot.com](http://n00tc0d3r.blogspot.com/) |
| Design a chat app like WhatsApp | [highscalability.com](http://highscalability.com/blog/2014/2/26/the-whatsapp-architecture-facebook-bought-for-19-billion.html) |
| Design a picture sharing system like Instagram | [highscalability.com](http://highscalability.com/flickr-architecture)<br/>[highscalability.com](http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html) |
| Design the Facebook news feed function | [quora.com](http://www.quora.com/What-are-best-practices-for-building-something-like-a-News-Feed)<br/>[quora.com](http://www.quora.com/Activity-Streams/What-are-the-scaling-issues-to-keep-in-mind-while-developing-a-social-network-feed)<br/>[slideshare.net](http://www.slideshare.net/danmckinley/etsy-activity-feeds-architecture) |
| Design the Facebook timeline function | [facebook.com](https://www.facebook.com/note.php?note_id=10150468255628920)<br/>[highscalability.com](http://highscalability.com/blog/2012/1/23/facebook-timeline-brought-to-you-by-the-power-of-denormaliza.html) |
| Design the Facebook chat function | [erlang-factory.com](http://www.erlang-factory.com/upload/presentations/31/EugeneLetuchy-ErlangatFacebook.pdf)<br/>[facebook.com](https://www.facebook.com/note.php?note_id=14218138919&id=9445547199&index=0) |
| Design a graph search function like Facebook's | [facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-building-out-the-infrastructure-for-graph-search/10151347573598920)<br/>[facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-indexing-and-ranking-in-graph-search/10151361720763920)<br/>[facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-the-natural-language-interface-of-graph-search/10151432733048920) |
| Design a content delivery network like CloudFlare | [figshare.com](https://figshare.com/articles/Globally_distributed_content_delivery/6605972) |
| Design a trending topic system like Twitter's | [michael-noll.com](http://www.michael-noll.com/blog/2013/01/18/implementing-real-time-trending-topics-in-storm/)<br/>[snikolov.wordpress.com](http://snikolov.wordpress.com/2012/11/14/early-detection-of-twitter-trends/) |
| Design a random ID generation system | [blog.twitter.com](https://blog.twitter.com/2010/announcing-snowflake)<br/>[github.com](https://github.com/twitter/snowflake/) |
| Return the top k requests during a time interval | [cs.ucsb.edu](https://www.cs.ucsb.edu/sites/cs.ucsb.edu/files/docs/reports/2005-23.pdf)<br/>[wpi.edu](http://davis.wpi.edu/xmdv/docs/EDBT11-diyang.pdf) |
| Design a system that serves data from multiple data centers | [highscalability.com](http://highscalability.com/blog/2009/8/24/how-google-serves-data-from-multiple-datacenters.html) |
| Design an online multiplayer card game | [indieflashblog.com](https://web.archive.org/web/20180929181117/http://www.indieflashblog.com/how-to-create-an-asynchronous-multiplayer-game.html)<br/>[buildnewgames.com](http://buildnewgames.com/real-time-multiplayer/) |
| Design a garbage collection system | [stuffwithstuff.com](http://journal.stuffwithstuff.com/2013/12/08/babys-first-garbage-collector/)<br/>[washington.edu](http://courses.cs.washington.edu/courses/csep521/07wi/prj/rick.pdf) |
| Design an API rate limiter | [stripe.com](https://stripe.com/blog/rate-limiters) |
| Design a Stock Exchange (like NASDAQ or Binance) | [Jane Street](https://youtu.be/b1e4t2k2KJY)<br/>[Golang Implementation](https://around25.com/blog/building-a-trading-engine-for-a-crypto-exchange/)<br/>[Go Implementation](http://bhomnick.net/building-a-simple-limit-order-in-go/) |
| Add a system design question | [Contribute](#contributing) |
### Real world architectures
| | | |
| Data store | **Bigtable** - Distributed column-oriented database from Google | [harvard.edu](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf) |
| Data store | **HBase** - Open source implementation of Bigtable | [slideshare.net](http://www.slideshare.net/alexbaranau/intro-to-hbase) |
| Data store | **Cassandra** - Distributed column-oriented database from Facebook | [slideshare.net](http://www.slideshare.net/planetcassandra/cassandra-introduction-features-30103666) |
| Data store | **DynamoDB** - Document-oriented database from Amazon | [harvard.edu](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/decandia07dynamo.pdf) |
| Data store | **MongoDB** - Document-oriented database | [slideshare.net](http://www.slideshare.net/mdirolf/introduction-to-mongodb) |
| Data store | **Spanner** - Globally-distributed database from Google | [research.google.com](http://research.google.com/archive/spanner-osdi2012.pdf) |
| Data store | **Memcached** - Distributed memory caching system | [slideshare.net](http://www.slideshare.net/oemebamo/introduction-to-memcached) |
| Data store | **Redis** - Distributed memory caching system with persistence and value types | [slideshare.net](http://www.slideshare.net/dvirsky/introduction-to-redis) |
| | | |
| File system | **Google File System (GFS)** - Distributed file system | [research.google.com](http://static.googleusercontent.com/media/research.google.com/zh-CN/us/archive/gfs-sosp2003.pdf) |
| File system | **Hadoop File System (HDFS)** - Open source implementation of GFS | [apache.org](http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) |
| | | |
| Misc | **Chubby** - Lock service for loosely-coupled distributed systems from Google | [research.google.com](http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/chubby-osdi06.pdf) |
| Misc | **Dapper** - Distributed systems tracing infrastructure | [research.google.com](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36356.pdf) |
| Misc | **Kafka** - Pub/sub message queue from LinkedIn | [slideshare.net](http://www.slideshare.net/mumrah/kafka-talk-tri-hug) |
| Misc | **Zookeeper** - Centralized infrastructure and services enabling synchronization | [slideshare.net](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) |
| | Add an architecture | [Contribute](#contributing) |
| DropBox | [How we've scaled Dropbox](https://www.youtube.com/watch?v=PE4gwstWhmc) |
| ESPN | [Operating At 100,000 duh nuh nuhs per second](http://highscalability.com/blog/2013/11/4/espns-architecture-at-scale-operating-at-100000-duh-nuh-nuhs.html) |
| Google | [Google architecture](http://highscalability.com/google-architecture) |
| Instagram | [14 million users, terabytes of photos](http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html)<br/>[What powers Instagram](http://instagram-engineering.tumblr.com/post/13649370142/what-powers-instagram-hundreds-of-instances) |
| Justin.tv | [Justin.Tv's live video broadcasting architecture](http://highscalability.com/blog/2010/3/16/justintvs-live-video-broadcasting-architecture.html) |
| Facebook | [Scaling memcached at Facebook](https://cs.uwaterloo.ca/~brecht/courses/854-Emerging-2014/readings/key-value/fb-memcached-nsdi-2013.pdf)<br/>[TAO: Facebook's distributed data store for the social graph](https://cs.uwaterloo.ca/~brecht/courses/854-Emerging-2014/readings/data-store/tao-facebook-distributed-datastore-atc-2013.pdf)<br/>[Facebook's photo storage](https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Beaver.pdf)<br/>[How Facebook Live Streams To 800,000 Simultaneous Viewers](http://highscalability.com/blog/2016/6/27/how-facebook-live-streams-to-800000-simultaneous-viewers.html) |
| Flickr | [Flickr architecture](http://highscalability.com/flickr-architecture) |
| Mailbox | [From 0 to one million users in 6 weeks](http://highscalability.com/blog/2013/6/18/scaling-mailbox-from-0-to-one-million-users-in-6-weeks-and-1.html) |
| Netflix | [A 360 Degree View Of The Entire Netflix Stack](http://highscalability.com/blog/2015/11/9/a-360-degree-view-of-the-entire-netflix-stack.html)<br/>[Netflix: What Happens When You Press Play?](http://highscalability.com/blog/2017/12/11/netflix-what-happens-when-you-press-play.html) |
| Pinterest | [From 0 To 10s of billions of page views a month](http://highscalability.com/blog/2013/4/15/scaling-pinterest-from-0-to-10s-of-billions-of-page-views-a.html)<br/>[18 million visitors, 10x growth, 12 employees](http://highscalability.com/blog/2012/5/21/pinterest-architecture-update-18-million-visitors-10x-growth.html) |
| Playfish | [50 million monthly users and growing](http://highscalability.com/blog/2010/9/21/playfishs-social-gaming-architecture-50-million-monthly-user.html) |
| PlentyOfFish | [PlentyOfFish architecture](http://highscalability.com/plentyoffish-architecture) |
| Salesforce | [How they handle 1.3 billion transactions a day](http://highscalability.com/blog/2013/9/23/salesforce-architecture-how-they-handle-13-billion-transacti.html) |
| Stack Overflow | [Stack Overflow architecture](http://highscalability.com/blog/2009/8/5/stack-overflow-architecture.html) |
| TripAdvisor | [40M visitors, 200M dynamic page views, 30TB data](http://highscalability.com/blog/2011/6/27/tripadvisor-architecture-40m-visitors-200m-dynamic-page-view.html) |
| Tumblr | [15 billion page views a month](http://highscalability.com/blog/2012/2/13/tumblr-architecture-15-billion-page-views-a-month-and-harder.html) |
| Twitter | [Making Twitter 10000 percent faster](http://highscalability.com/scaling-twitter-making-twitter-10000-percent-faster)<br/>[Storing 250 million tweets a day using MySQL](http://highscalability.com/blog/2011/12/19/how-twitter-stores-250-million-tweets-a-day-using-mysql.html)<br/>[150M active users, 300K QPS, a 22 MB/S firehose](http://highscalability.com/blog/2013/7/8/the-architecture-twitter-uses-to-deal-with-150m-active-users.html)<br/>[Timelines at scale](https://www.infoq.com/presentations/Twitter-Timeline-Scalability)<br/>[Big and small data at Twitter](https://www.youtube.com/watch?v=5cKTP36HVgI)<br/>[Operations at Twitter: scaling beyond 100 million users](https://www.youtube.com/watch?v=z8LU0Cj6BOU)<br/>[How Twitter Handles 3,000 Images Per Second](http://highscalability.com/blog/2016/4/20/how-twitter-handles-3000-images-per-second.html) |
| Uber | [How Uber scales their real-time market platform](http://highscalability.com/blog/2015/9/14/how-uber-scales-their-real-time-market-platform.html)<br/>[Lessons Learned From Scaling Uber To 2000 Engineers, 1000 Services, And 8000 Git Repositories](http://highscalability.com/blog/2016/10/12/lessons-learned-from-scaling-uber-to-2000-engineers-1000-ser.html) |
| WhatsApp | [The WhatsApp architecture Facebook bought for $19 billion](http://highscalability.com/blog/2014/2/26/the-whatsapp-architecture-facebook-bought-for-19-billion.html) |
| YouTube | [YouTube scalability](https://www.youtube.com/watch?v=w5WVu624fY8)<br/>[YouTube architecture](http://highscalability.com/youtube-architecture) |
### Company engineering blogs
>
> Questions you encounter might be from the same domain.
* [Airbnb Engineering](http://nerds.airbnb.com/)
* [Atlassian Developers](https://developer.atlassian.com/blog/)
* [AWS Blog](https://aws.amazon.com/blogs/aws/)
* [Bitly Engineering Blog](http://word.bitly.com/)
* [Box Blogs](https://blog.box.com/blog/category/engineering)
* [Cloudera Developer Blog](http://blog.cloudera.com/)
* [Dropbox Tech Blog](https://tech.dropbox.com/)
* [Engineering at Quora](https://www.quora.com/q/quoraengineering)
* [Ebay Tech Blog](http://www.ebaytechblog.com/)
* [Evernote Tech Blog](https://blog.evernote.com/tech/)
* [Etsy Code as Craft](http://codeascraft.com/)
* [Facebook Engineering](https://www.facebook.com/Engineering)
* [Flickr Code](http://code.flickr.net/)
* [Foursquare Engineering Blog](http://engineering.foursquare.com/)
* [GitHub Engineering Blog](http://githubengineering.com/)
* [Google Research Blog](http://googleresearch.blogspot.com/)
* [Groupon Engineering Blog](https://engineering.groupon.com/)
* [Heroku Engineering Blog](https://engineering.heroku.com/)
* [Hubspot Engineering Blog](http://product.hubspot.com/blog/topic/engineering)
* [High Scalability](http://highscalability.com/)
* [Instagram Engineering](http://instagram-engineering.tumblr.com/)
* [Intel Software Blog](https://software.intel.com/en-us/blogs/)
* [Jane Street Tech Blog](https://blogs.janestreet.com/category/ocaml/)
* [LinkedIn Engineering](http://engineering.linkedin.com/blog)
* [Microsoft Engineering](https://engineering.microsoft.com/)
* [Microsoft Python Engineering](https://blogs.msdn.microsoft.com/pythonengineering/)
* [Netflix Tech Blog](http://techblog.netflix.com/)
* [Paypal Developer Blog](https://medium.com/paypal-engineering)
* [Pinterest Engineering Blog](https://medium.com/@Pinterest_Engineering)
* [Reddit Blog](http://www.redditblog.com/)
* [Salesforce Engineering Blog](https://developer.salesforce.com/blogs/engineering/)
* [Slack Engineering Blog](https://slack.engineering/)
* [Spotify Labs](https://labs.spotify.com/)
* [Twilio Engineering Blog](http://www.twilio.com/engineering)
* [Twitter Engineering](https://blog.twitter.com/engineering/)
* [Uber Engineering Blog](http://eng.uber.com/)
* [Yahoo Engineering Blog](http://yahooeng.tumblr.com/)
* [Yelp Engineering Blog](http://engineeringblog.yelp.com/)
* [Zynga Engineering Blog](https://www.zynga.com/blogs/engineering)
#### Source(s) and further reading
Looking to add a blog? To avoid duplicating work, consider adding your company blog to the following repo:
* [kilimchoi/engineering-blogs](https://github.com/kilimchoi/engineering-blogs)
## Under development
Interested in adding a section or helping complete one in-progress? [Contribute](#contributing)!
* Distributed computing with MapReduce
* Consistent hashing
* Scatter gather
* [Contribute](#contributing)
## Credits
Credits and sources are provided throughout this repo.
Special thanks to:
* [Hired in tech](http://www.hiredintech.com/system-design/the-system-design-process/)
* [Cracking the coding interview](https://www.amazon.com/dp/0984782850/)
* [High scalability](http://highscalability.com/)
* [checkcheckzz/system-design-interview](https://github.com/checkcheckzz/system-design-interview)
* [shashank88/system_design](https://github.com/shashank88/system_design)
* [mmcgrana/services-engineering](https://github.com/mmcgrana/services-engineering)
* [System design cheat sheet](https://gist.github.com/vasanthk/485d1c25737e8e72759f)
* [A distributed systems reading list](http://dancres.github.io/Pages/)
* [Cracking the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
## Contact info
Feel free to contact me to discuss any issues, questions, or comments.
My contact info can be found on my [GitHub page](https://github.com/donnemartin).
## License
*I am providing code and resources in this repository to you under an open source license. Because this is my personal repository, the license you receive to my code and resources is from me and not my employer (Facebook).*
Copyright 2017 Donne Martin
Creative Commons Attribution 4.0 International License (CC BY 4.0)
http://creativecommons.org/licenses/by/4.0/

View File

## Contributing
See the [Contributing Guidelines](CONTRIBUTING.md).
## Translation Statuses
**Within the past 2 months, there has been 1) No active work in the translation fork, and 2) No discussions from previous maintainer(s) in the discussion thread.*
Languages not listed here have not been started, [contribute](CONTRIBUTING.md)!
Languages are grouped by status and are listed in alphabetical order.
### 🎉 Japanese
* [README-ja.md](README-ja.md)
* Maintainer(s): [@tsukukobaan](https://github.com/tsukukobaan) 👏
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/100
### 🎉 Simplified Chinese
* [zh-Hans.md](README-zh-Hans.md)
* Maintainer(s): [@sqrthree](https://github.com/sqrthree) 👏
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/38
### 🎉 Traditional Chinese
* [README-zh-TW.md](README-zh-TW.md)
* Maintainer(s): [@kevingo](https://github.com/kevingo) 👏
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/88
## In Progress
### ⏳ Korean
* Maintainer(s): [@bonomoon](https://github.com/bonomoon), [@mingrammer](https://github.com/mingrammer) 👏
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/102
* Translation Fork: https://github.com/bonomoon/system-design-primer, https://github.com/donnemartin/system-design-primer/pull/103
### ⏳ Russian
* Maintainer(s): [@voitau](https://github.com/voitau), [@DmitryOlkhovoi](https://github.com/DmitryOlkhovoi) 👏
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/87
* Translation Fork: https://github.com/voitau/system-design-primer/blob/master/README-ru.md
* If you're able to commit to being an active maintainer for a language, let us know in the discussion thread for your language and update this file with a pull request.
* If you're listed here as a "Previous Maintainer" but can commit to being an active maintainer, also let us know.
* See the [Contributing Guidelines](CONTRIBUTING.md).
### ❗ Arabic
* Maintainer(s): **Help Wanted**
* Previous Maintainer(s): [@aymns](https://github.com/aymns)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/170
* Translation Fork: https://github.com/aymns/system-design-primer/blob/develop/README-ar.md
### ❗ Bengali
* Maintainer(s): **Help Wanted**
* Previous Maintainer(s): [@nutboltu](https://github.com/nutboltu)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/220
* Translation Fork: https://github.com/donnemartin/system-design-primer/pull/240
### ❗ Brazilian Portuguese
* Maintainer(s): **Help Wanted**
* Previous Maintainer(s): [@IuryAlves](https://github.com/IuryAlves)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/40
* Translation Fork: https://github.com/IuryAlves/system-design-primer, https://github.com/donnemartin/system-design-primer/pull/67
### ❗ French
* Maintainer(s): **Help Wanted**
* Previous Maintainer(s): [@spuyet](https://github.com/spuyet)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/250
* Translation Fork: https://github.com/spuyet/system-design-primer/blob/add-french-translation/README-fr.md
### ❗ German
* Maintainer(s): **Help Wanted**
* Previous Maintainer(s): [@Allaman](https://github.com/Allaman)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/186
* Translation Fork: None
### ❗ Greek
* Maintainer(s): **Help Wanted**
* Previous Maintainer(s): [@Belonias](https://github.com/Belonias)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/130
* Translation Fork: None
### ❗ Hebrew
* Maintainer(s): **Help Wanted**
* Previous Maintainer(s): [@EladLeev](https://github.com/EladLeev)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/272
* Translation Fork: https://github.com/EladLeev/system-design-primer/tree/he-translate
### ❗ Italian
* Maintainer(s): **Help Wanted**
* Previous Maintainer(s): [@pgoodjohn](https://github.com/pgoodjohn)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/104
* Translation Fork: https://github.com/pgoodjohn/system-design-primer
### ❗ Persian
* Maintainer(s): **Help Wanted**
* Previous Maintainer(s): [@hadisinaee](https://github.com/hadisinaee)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/pull/112
* Translation Fork: https://github.com/donnemartin/system-design-primer/pull/112
### ❗ Spanish
* Maintainer(s): **Help Wanted**
* Previous Maintainer(s): [@eamanu](https://github.com/eamanu)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/136
* Translation Fork: https://github.com/donnemartin/system-design-primer/pull/189
### ❗ Thai
* Maintainer(s): **Help Wanted**
* Previous Maintainer(s): [@iphayao](https://github.com/iphayao)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/187
* Translation Fork: https://github.com/donnemartin/system-design-primer/pull/221
### ❗ Turkish
* Maintainer(s): **Help Wanted**
* Previous Maintainer(s): [@hwclass](https://github.com/hwclass), [@canerbaran](https://github.com/canerbaran), [@emrahtoy](https://github.com/emrahtoy)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/39
* Translation Fork: https://github.com/donnemartin/system-design-primer/pull/239
### ❗ Ukrainian
* Maintainer(s): **Help Wanted**
* Previous Maintainer(s): [@Kietzmann](https://github.com/Kietzmann), [@Acarus](https://github.com/Acarus)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/248
* Translation Fork: https://github.com/Acarus/system-design-primer
### ❗ Vietnamese
* Maintainer(s): **Help Wanted**
* Previous Maintainer(s): [@tranlyvu](https://github.com/tranlyvu), [@duynguyenhoang](https://github.com/duynguyenhoang)
* Discussion Thread: https://github.com/donnemartin/system-design-primer/issues/127
* Translation Fork: https://github.com/donnemartin/system-design-primer/pull/241, https://github.com/donnemartin/system-design-primer/pull/327
## Not Started
Languages not listed here have not been started, [contribute](CONTRIBUTING.md)!

View File

check_dependencies () {
for dependency in "${dependencies[@]}"
do
if ! [ -x "$(command -v $dependency)" ]; then
echo "Error: $dependency is not installed." >&2
exit 1
fi

View File

"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook was prepared by [Donne Martin](https://github.com/donnemartin) . Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer) ."
"This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer)."
]
},
{
"from enum import Enum\n",
"\n",
"\n",
"class Rank(Enum) :\n",
"class Rank(Enum):\n",
"\n",
" OPERATOR = 0\n",
" SUPERVISOR = 1\n",
" DIRECTOR = 2\n",
"\n",
"\n",
"class Employee(metaclass=ABCMeta) :\n",
"class Employee(metaclass=ABCMeta):\n",
"\n",
" def __init__(self, employee_id, name, rank, call_center) :\n",
" def __init__(self, employee_id, name, rank, call_center):\n",
" self.employee_id = employee_id\n",
" self.name = name\n",
" self.rank = rank\n",
" self.call = None\n",
" self.call_center = call_center\n",
"\n",
" def take_call(self, call) :\n",
" def take_call(self, call):\n",
" \"\"\"Assume the employee will always successfully take the call.\"\"\"\n",
" self.call = call\n",
" self.call.employee = self\n",
" self.call.state = CallState.IN_PROGRESS\n",
"\n",
" def complete_call(self) :\n",
" def complete_call(self):\n",
" self.call.state = CallState.COMPLETE\n",
" self.call_center.notify_call_completed(self.call) \n",
" self.call_center.notify_call_completed(self.call)\n",
"\n",
" @abstractmethod\n",
" def escalate_call(self) :\n",
" def escalate_call(self):\n",
" pass\n",
"\n",
" def _escalate_call(self) :\n",
" def _escalate_call(self):\n",
" self.call.state = CallState.READY\n",
" call = self.call\n",
" self.call = None\n",
" self.call_center.notify_call_escalated(call) \n",
" self.call_center.notify_call_escalated(call)\n",
"\n",
"\n",
"class Operator(Employee) :\n",
"class Operator(Employee):\n",
"\n",
" def __init__(self, employee_id, name) :\n",
" super(Operator, self) .__init__(employee_id, name, Rank.OPERATOR) \n",
" def __init__(self, employee_id, name):\n",
" super(Operator, self).__init__(employee_id, name, Rank.OPERATOR)\n",
"\n",
" def escalate_call(self) :\n",
" def escalate_call(self):\n",
" self.call.level = Rank.SUPERVISOR\n",
" self._escalate_call() \n",
" self._escalate_call()\n",
"\n",
"\n",
"class Supervisor(Employee) :\n",
"class Supervisor(Employee):\n",
"\n",
" def __init__(self, employee_id, name) :\n",
" super(Operator, self) .__init__(employee_id, name, Rank.SUPERVISOR) \n",
" def __init__(self, employee_id, name):\n",
" super(Operator, self).__init__(employee_id, name, Rank.SUPERVISOR)\n",
"\n",
" def escalate_call(self) :\n",
" def escalate_call(self):\n",
" self.call.level = Rank.DIRECTOR\n",
" self._escalate_call() \n",
" self._escalate_call()\n",
"\n",
"\n",
"class Director(Employee) :\n",
"class Director(Employee):\n",
"\n",
" def __init__(self, employee_id, name) :\n",
" super(Operator, self) .__init__(employee_id, name, Rank.DIRECTOR) \n",
" def __init__(self, employee_id, name):\n",
" super(Operator, self).__init__(employee_id, name, Rank.DIRECTOR)\n",
"\n",
" def escalate_call(self) :\n",
" raise NotImplemented('Directors must be able to handle any call') \n",
" def escalate_call(self):\n",
" raise NotImplemented('Directors must be able to handle any call')\n",
"\n",
"\n",
"class CallState(Enum) :\n",
"class CallState(Enum):\n",
"\n",
" READY = 0\n",
" IN_PROGRESS = 1\n",
" COMPLETE = 2\n",
"\n",
"\n",
"class Call(object) :\n",
"class Call(object):\n",
"\n",
" def __init__(self, rank) :\n",
" def __init__(self, rank):\n",
" self.state = CallState.READY\n",
" self.rank = rank\n",
" self.employee = None\n",
"\n",
"\n",
"class CallCenter(object) :\n",
"class CallCenter(object):\n",
"\n",
" def __init__(self, operators, supervisors, directors) :\n",
" def __init__(self, operators, supervisors, directors):\n",
" self.operators = operators\n",
" self.supervisors = supervisors\n",
" self.directors = directors\n",
" self.queued_calls = deque() \n",
" self.queued_calls = deque()\n",
"\n",
" def dispatch_call(self, call) :\n",
" if call.rank not in (Rank.OPERATOR, Rank.SUPERVISOR, Rank.DIRECTOR) :\n",
" def dispatch_call(self, call):\n",
" if call.rank not in (Rank.OPERATOR, Rank.SUPERVISOR, Rank.DIRECTOR):\n",
" raise ValueError('Invalid call rank: {}'.format(call.rank))\n",
" employee = None\n",
" if call.rank == Rank.OPERATOR:\n",
" employee = self._dispatch_call(call, self.operators) \n",
" employee = self._dispatch_call(call, self.operators)\n",
" if call.rank == Rank.SUPERVISOR or employee is None:\n",
" employee = self._dispatch_call(call, self.supervisors) \n",
" employee = self._dispatch_call(call, self.supervisors)\n",
" if call.rank == Rank.DIRECTOR or employee is None:\n",
" employee = self._dispatch_call(call, self.directors) \n",
" employee = self._dispatch_call(call, self.directors)\n",
" if employee is None:\n",
" self.queued_calls.append(call) \n",
" self.queued_calls.append(call)\n",
"\n",
" def _dispatch_call(self, call, employees) :\n",
" def _dispatch_call(self, call, employees):\n",
" for employee in employees:\n",
" if employee.call is None:\n",
" employee.take_call(call) \n",
" employee.take_call(call)\n",
" return employee\n",
" return None\n",
"\n",
" def notify_call_escalated(self, call) : # ...\n",
" def notify_call_completed(self, call) : # ...\n",
" def dispatch_queued_call_to_newly_freed_employee(self, call, employee) : # ..."
" def notify_call_escalated(self, call): # ...\n",
" def notify_call_completed(self, call): # ...\n",
" def dispatch_queued_call_to_newly_freed_employee(self, call, employee): # ..."
]
}
],

View File

from abc import ABCMeta, abstractmethod
from collections import deque
from enum import Enum
class Rank(Enum):
OPERATOR = 0
SUPERVISOR = 1
DIRECTOR = 2
class Employee(metaclass=ABCMeta):
def __init__(self, employee_id, name, rank, call_center):
self.employee_id = employee_id
self.name = name
self.rank = rank
self.call = None
self.call_center = call_center
def take_call(self, call):
"""Assume the employee will always successfully take the call."""
self.call = call
self.call.employee = self
self.call.state = CallState.IN_PROGRESS
def complete_call(self):
self.call.state = CallState.COMPLETE
self.call_center.notify_call_completed(self.call)
@abstractmethod
def escalate_call(self):
pass
def _escalate_call(self):
self.call.state = CallState.READY
call = self.call
self.call = None
self.call_center.notify_call_escalated(call)
class Operator(Employee):
    def __init__(self, employee_id, name, call_center):
        # Pass the call center through so Employee receives all four constructor arguments.
        super(Operator, self).__init__(employee_id, name, Rank.OPERATOR, call_center)
def escalate_call(self):
self.call.level = Rank.SUPERVISOR
self._escalate_call()
class Supervisor(Employee):
    def __init__(self, employee_id, name, call_center):
        super(Supervisor, self).__init__(employee_id, name, Rank.SUPERVISOR, call_center)
def escalate_call(self):
self.call.level = Rank.DIRECTOR
self._escalate_call()
class Director(Employee):
    def __init__(self, employee_id, name, call_center):
        super(Director, self).__init__(employee_id, name, Rank.DIRECTOR, call_center)
def escalate_call(self):
raise NotImplementedError('Directors must be able to handle any call')
class CallState(Enum):
READY = 0
IN_PROGRESS = 1
COMPLETE = 2
class Call(object):
def __init__(self, rank):
self.state = CallState.READY
self.rank = rank
self.employee = None
class CallCenter(object):
def __init__(self, operators, supervisors, directors):
self.operators = operators
self.supervisors = supervisors
self.directors = directors
self.queued_calls = deque()
def dispatch_call(self, call):
if call.rank not in (Rank.OPERATOR, Rank.SUPERVISOR, Rank.DIRECTOR):
raise ValueError('Invalid call rank: {}'.format(call.rank))
employee = None
if call.rank == Rank.OPERATOR:
employee = self._dispatch_call(call, self.operators)
if call.rank == Rank.SUPERVISOR or employee is None:
employee = self._dispatch_call(call, self.supervisors)
if call.rank == Rank.DIRECTOR or employee is None:
employee = self._dispatch_call(call, self.directors)
if employee is None:
self.queued_calls.append(call)
def _dispatch_call(self, call, employees):
for employee in employees:
if employee.call is None:
employee.take_call(call)
return employee
return None
def notify_call_escalated(self, call):
pass
def notify_call_completed(self, call):
pass
def dispatch_queued_call_to_newly_freed_employee(self, call, employee):
pass
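# Example usage -- an illustrative sketch, not part of the original file; it assumes
# the call_center pass-through in the subclass constructors above.
if __name__ == '__main__':
    center = CallCenter(operators=[], supervisors=[], directors=[])
    center.operators.append(Operator(employee_id=1, name='Alice', call_center=center))
    call = Call(Rank.OPERATOR)
    center.dispatch_call(call)
    print(call.state)  # CallState.IN_PROGRESS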

View File

"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook was prepared by [Donne Martin](https://github.com/donnemartin) . Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer) ."
"This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer)."
]
},
{
"import sys\n",
"\n",
"\n",
"class Suit(Enum) :\n",
"class Suit(Enum):\n",
"\n",
" HEART = 0\n",
" DIAMOND = 1\n",
" SPADE = 3\n",
"\n",
"\n",
"class Card(metaclass=ABCMeta) :\n",
"class Card(metaclass=ABCMeta):\n",
"\n",
" def __init__(self, value, suit) :\n",
" def __init__(self, value, suit):\n",
" self.value = value\n",
" self.suit = suit\n",
" self.is_available = True\n",
"\n",
" @property\n",
" @abstractmethod\n",
" def value(self) :\n",
" def value(self):\n",
" pass\n",
"\n",
" @value.setter\n",
" @abstractmethod\n",
" def value(self, other) :\n",
" def value(self, other):\n",
" pass\n",
"\n",
"\n",
"class BlackJackCard(Card) :\n",
"class BlackJackCard(Card):\n",
"\n",
" def __init__(self, value, suit) :\n",
" super(BlackJackCard, self) .__init__(value, suit) \n",
" def __init__(self, value, suit):\n",
" super(BlackJackCard, self).__init__(value, suit)\n",
"\n",
" def is_ace(self) :\n",
" def is_ace(self):\n",
" return self._value == 1\n",
"\n",
" def is_face_card(self) :\n",
" def is_face_card(self):\n",
" \"\"\"Jack = 11, Queen = 12, King = 13\"\"\"\n",
" return 10 < self._value <= 13\n",
"\n",
" @property\n",
" def value(self) :\n",
" def value(self):\n",
" if self.is_ace() == 1:\n",
" return 1\n",
" elif self.is_face_card() :\n",
" elif self.is_face_card():\n",
" return 10\n",
" else:\n",
" return self._value\n",
"\n",
" @value.setter\n",
" def value(self, new_value) :\n",
" def value(self, new_value):\n",
" if 1 <= new_value <= 13:\n",
" self._value = new_value\n",
" else:\n",
" raise ValueError('Invalid card value: {}'.format(new_value))\n",
"\n",
"\n",
"class Hand(object) :\n",
"class Hand(object):\n",
"\n",
" def __init__(self, cards) :\n",
" def __init__(self, cards):\n",
" self.cards = cards\n",
"\n",
" def add_card(self, card) :\n",
" self.cards.append(card) \n",
" def add_card(self, card):\n",
" self.cards.append(card)\n",
"\n",
" def score(self) :\n",
" def score(self):\n",
" total_value = 0\n",
" for card in self.cards:\n",
" total_value += card.value\n",
" return total_value\n",
"\n",
"\n",
"class BlackJackHand(Hand) :\n",
"class BlackJackHand(Hand):\n",
"\n",
" BLACKJACK = 21\n",
"\n",
" def __init__(self, cards) :\n",
" super(BlackJackHand, self) .__init__(cards) \n",
" def __init__(self, cards):\n",
" super(BlackJackHand, self).__init__(cards)\n",
"\n",
" def score(self) :\n",
" def score(self):\n",
" min_over = sys.MAXSIZE\n",
" max_under = -sys.MAXSIZE\n",
" for score in self.possible_scores() :\n",
" for score in self.possible_scores():\n",
" if self.BLACKJACK < score < min_over:\n",
" min_over = score\n",
" elif max_under < score <= self.BLACKJACK:\n",
" max_under = score\n",
" return max_under if max_under != -sys.MAXSIZE else min_over\n",
"\n",
" def possible_scores(self) :\n",
" def possible_scores(self):\n",
" \"\"\"Return a list of possible scores, taking Aces into account.\"\"\"\n",
" # ...\n",
"\n",
"\n",
"class Deck(object) :\n",
"class Deck(object):\n",
"\n",
" def __init__(self, cards) :\n",
" def __init__(self, cards):\n",
" self.cards = cards\n",
" self.deal_index = 0\n",
"\n",
" def remaining_cards(self) :\n",
" def remaining_cards(self):\n",
" return len(self.cards) - deal_index\n",
"\n",
" def deal_card() :\n",
" def deal_card():\n",
" try:\n",
" card = self.cards[self.deal_index]\n",
" card.is_available = False\n",
@ -167,7 +167,7 @@
" return None\n",
" return card\n",
"\n",
" def shuffle(self) : # ..."
" def shuffle(self): # ..."
]
}
],


@ -3,7 +3,7 @@ from enum import Enum
import sys


class Suit(Enum):

    HEART = 0
    DIAMOND = 1
@ -11,100 +11,100 @@ class Suit(Enum):
    SPADE = 3


class Card(metaclass=ABCMeta):

    def __init__(self, value, suit):
        self.value = value
        self.suit = suit
        self.is_available = True

    @property
    @abstractmethod
    def value(self):
        pass

    @value.setter
    @abstractmethod
    def value(self, other):
        pass


class BlackJackCard(Card):

    def __init__(self, value, suit):
        super(BlackJackCard, self).__init__(value, suit)

    def is_ace(self):
        return self._value == 1

    def is_face_card(self):
        """Jack = 11, Queen = 12, King = 13"""
        return 10 < self._value <= 13

    @property
    def value(self):
        if self.is_ace():
            return 1
        elif self.is_face_card():
            return 10
        else:
            return self._value

    @value.setter
    def value(self, new_value):
        if 1 <= new_value <= 13:
            self._value = new_value
        else:
            raise ValueError('Invalid card value: {}'.format(new_value))


class Hand(object):

    def __init__(self, cards):
        self.cards = cards

    def add_card(self, card):
        self.cards.append(card)

    def score(self):
        total_value = 0
        for card in self.cards:
            total_value += card.value
        return total_value


class BlackJackHand(Hand):

    BLACKJACK = 21

    def __init__(self, cards):
        super(BlackJackHand, self).__init__(cards)

    def score(self):
        min_over = sys.maxsize
        max_under = -sys.maxsize
        for score in self.possible_scores():
            if self.BLACKJACK < score < min_over:
                min_over = score
            elif max_under < score <= self.BLACKJACK:
                max_under = score
        return max_under if max_under != -sys.maxsize else min_over

    def possible_scores(self):
        """Return a list of possible scores, taking Aces into account."""
        pass


class Deck(object):

    def __init__(self, cards):
        self.cards = cards
        self.deal_index = 0

    def remaining_cards(self):
        return len(self.cards) - self.deal_index

    def deal_card(self):
        try:
            card = self.cards[self.deal_index]
            card.is_available = False
@ -113,5 +113,5 @@ class Deck(object):
            return None
        return card

    def shuffle(self):
        pass


@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook was prepared by [Donne Martin](https://github.com/donnemartin) . Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer) ."
"This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer)."
]
},
{
@ -56,44 +56,44 @@
],
"source": [
"%%writefile hash_map.py\n",
"class Item(object) :\n",
"class Item(object):\n",
"\n",
" def __init__(self, key, value) :\n",
" def __init__(self, key, value):\n",
" self.key = key\n",
" self.value = value\n",
"\n",
"\n",
"class HashTable(object) :\n",
"class HashTable(object):\n",
"\n",
" def __init__(self, size) :\n",
" def __init__(self, size):\n",
" self.size = size\n",
" self.table = [[] for _ in range(self.size) ]\n",
" self.table = [[] for _ in range(self.size)]\n",
"\n",
" def _hash_function(self, key) :\n",
" def _hash_function(self, key):\n",
" return key % self.size\n",
"\n",
" def set(self, key, value) :\n",
" hash_index = self._hash_function(key) \n",
" def set(self, key, value):\n",
" hash_index = self._hash_function(key)\n",
" for item in self.table[hash_index]:\n",
" if item.key == key:\n",
" item.value = value\n",
" return\n",
" self.table[hash_index].append(Item(key, value))\n",
"\n",
" def get(self, key) :\n",
" hash_index = self._hash_function(key) \n",
" def get(self, key):\n",
" hash_index = self._hash_function(key)\n",
" for item in self.table[hash_index]:\n",
" if item.key == key:\n",
" return item.value\n",
" raise KeyError('Key not found') \n",
" raise KeyError('Key not found')\n",
"\n",
" def remove(self, key) :\n",
" hash_index = self._hash_function(key) \n",
" for index, item in enumerate(self.table[hash_index]) :\n",
" def remove(self, key):\n",
" hash_index = self._hash_function(key)\n",
" for index, item in enumerate(self.table[hash_index]):\n",
" if item.key == key:\n",
" del self.table[hash_index][index]\n",
" return\n",
" raise KeyError('Key not found') "
" raise KeyError('Key not found')"
]
}
],


@ -1,38 +1,38 @@
class Item(object):

    def __init__(self, key, value):
        self.key = key
        self.value = value


class HashTable(object):

    def __init__(self, size):
        self.size = size
        self.table = [[] for _ in range(self.size)]

    def _hash_function(self, key):
        return key % self.size

    def set(self, key, value):
        hash_index = self._hash_function(key)
        for item in self.table[hash_index]:
            if item.key == key:
                item.value = value
                return
        self.table[hash_index].append(Item(key, value))

    def get(self, key):
        hash_index = self._hash_function(key)
        for item in self.table[hash_index]:
            if item.key == key:
                return item.value
        raise KeyError('Key not found')

    def remove(self, key):
        hash_index = self._hash_function(key)
        for index, item in enumerate(self.table[hash_index]):
            if item.key == key:
                del self.table[hash_index][index]
                return
        raise KeyError('Key not found')


@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook was prepared by [Donne Martin](https://github.com/donnemartin) . Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer) ."
"This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer)."
]
},
{
@ -52,67 +52,67 @@
],
"source": [
"%%writefile lru_cache.py\n",
"class Node(object) :\n",
"class Node(object):\n",
"\n",
" def __init__(self, results) :\n",
" def __init__(self, results):\n",
" self.results = results\n",
" self.prev = None\n",
" self.next = None\n",
"\n",
"\n",
"class LinkedList(object) :\n",
"class LinkedList(object):\n",
"\n",
" def __init__(self) :\n",
" def __init__(self):\n",
" self.head = None\n",
" self.tail = None\n",
"\n",
" def move_to_front(self, node) : # ...\n",
" def append_to_front(self, node) : # ...\n",
" def remove_from_tail(self) : # ...\n",
" def move_to_front(self, node): # ...\n",
" def append_to_front(self, node): # ...\n",
" def remove_from_tail(self): # ...\n",
"\n",
"\n",
"class Cache(object) :\n",
"class Cache(object):\n",
"\n",
" def __init__(self, MAX_SIZE) :\n",
" def __init__(self, MAX_SIZE):\n",
" self.MAX_SIZE = MAX_SIZE\n",
" self.size = 0\n",
" self.lookup = {} # key: query, value: node\n",
" self.linked_list = LinkedList() \n",
" self.linked_list = LinkedList()\n",
"\n",
" def get(self, query) \n",
" def get(self, query)\n",
" \"\"\"Get the stored query result from the cache.\n",
" \n",
" Accessing a node updates its position to the front of the LRU list.\n",
" \"\"\"\n",
" node = self.lookup.get(query) \n",
" node = self.lookup.get(query)\n",
" if node is None:\n",
" return None\n",
" self.linked_list.move_to_front(node) \n",
" self.linked_list.move_to_front(node)\n",
" return node.results\n",
"\n",
" def set(self, results, query) :\n",
" def set(self, results, query):\n",
" \"\"\"Set the result for the given query key in the cache.\n",
" \n",
" When updating an entry, updates its position to the front of the LRU list.\n",
" If the entry is new and the cache is at capacity, removes the oldest entry\n",
" before the new entry is added.\n",
" \"\"\"\n",
" node = self.lookup.get(query) \n",
" node = self.lookup.get(query)\n",
" if node is not None:\n",
" # Key exists in cache, update the value\n",
" node.results = results\n",
" self.linked_list.move_to_front(node) \n",
" self.linked_list.move_to_front(node)\n",
" else:\n",
" # Key does not exist in cache\n",
" if self.size == self.MAX_SIZE:\n",
" # Remove the oldest entry from the linked list and lookup\n",
" self.lookup.pop(self.linked_list.tail.query, None) \n",
" self.linked_list.remove_from_tail() \n",
" self.lookup.pop(self.linked_list.tail.query, None)\n",
" self.linked_list.remove_from_tail()\n",
" else:\n",
" self.size += 1\n",
" # Add the new key and value\n",
" new_node = Node(results) \n",
" self.linked_list.append_to_front(new_node) \n",
" new_node = Node(results)\n",
" self.linked_list.append_to_front(new_node)\n",
" self.lookup[query] = new_node"
]
}


@ -1,66 +1,66 @@
class Node(object):

    def __init__(self, results):
        self.results = results
        self.prev = None
        self.next = None


class LinkedList(object):

    def __init__(self):
        self.head = None
        self.tail = None

    def move_to_front(self, node):
        pass

    def append_to_front(self, node):
        pass

    def remove_from_tail(self):
        pass


class Cache(object):

    def __init__(self, MAX_SIZE):
        self.MAX_SIZE = MAX_SIZE
        self.size = 0
        self.lookup = {}  # key: query, value: node
        self.linked_list = LinkedList()

    def get(self, query):
        """Get the stored query result from the cache.

        Accessing a node updates its position to the front of the LRU list.
        """
        node = self.lookup.get(query)
        if node is None:
            return None
        self.linked_list.move_to_front(node)
        return node.results

    def set(self, results, query):
        """Set the result for the given query key in the cache.

        When updating an entry, updates its position to the front of the LRU list.
        If the entry is new and the cache is at capacity, removes the oldest entry
        before the new entry is added.
        """
        node = self.lookup.get(query)
        if node is not None:
            # Key exists in cache, update the value
            node.results = results
            self.linked_list.move_to_front(node)
        else:
            # Key does not exist in cache
            if self.size == self.MAX_SIZE:
                # Remove the oldest entry from the linked list and lookup
                self.lookup.pop(self.linked_list.tail.query, None)
                self.linked_list.remove_from_tail()
            else:
                self.size += 1
            # Add the new key and value
            new_node = Node(results)
            self.linked_list.append_to_front(new_node)
            self.lookup[query] = new_node


@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook was prepared by [Donne Martin](https://github.com/donnemartin) . Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer) ."
"This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer)."
]
},
{
@ -67,21 +67,21 @@
"from abc import ABCMeta\n",
"\n",
"\n",
"class UserService(object) :\n",
"class UserService(object):\n",
"\n",
" def __init__(self) :\n",
" def __init__(self):\n",
" self.users_by_id = {} # key: user id, value: User\n",
"\n",
" def add_user(self, user_id, name, pass_hash) : # ...\n",
" def remove_user(self, user_id) : # ...\n",
" def add_friend_request(self, from_user_id, to_user_id) : # ...\n",
" def approve_friend_request(self, from_user_id, to_user_id) : # ...\n",
" def reject_friend_request(self, from_user_id, to_user_id) : # ...\n",
" def add_user(self, user_id, name, pass_hash): # ...\n",
" def remove_user(self, user_id): # ...\n",
" def add_friend_request(self, from_user_id, to_user_id): # ...\n",
" def approve_friend_request(self, from_user_id, to_user_id): # ...\n",
" def reject_friend_request(self, from_user_id, to_user_id): # ...\n",
"\n",
"\n",
"class User(object) :\n",
"class User(object):\n",
"\n",
" def __init__(self, user_id, name, pass_hash) :\n",
" def __init__(self, user_id, name, pass_hash):\n",
" self.user_id = user_id\n",
" self.name = name\n",
" self.pass_hash = pass_hash\n",
@ -91,54 +91,54 @@
" self.received_friend_requests_by_friend_id = {} # key: friend id, value: AddRequest\n",
" self.sent_friend_requests_by_friend_id = {} # key: friend id, value: AddRequest\n",
"\n",
" def message_user(self, friend_id, message) : # ...\n",
" def message_group(self, group_id, message) : # ...\n",
" def send_friend_request(self, friend_id) : # ...\n",
" def receive_friend_request(self, friend_id) : # ...\n",
" def approve_friend_request(self, friend_id) : # ...\n",
" def reject_friend_request(self, friend_id) : # ...\n",
" def message_user(self, friend_id, message): # ...\n",
" def message_group(self, group_id, message): # ...\n",
" def send_friend_request(self, friend_id): # ...\n",
" def receive_friend_request(self, friend_id): # ...\n",
" def approve_friend_request(self, friend_id): # ...\n",
" def reject_friend_request(self, friend_id): # ...\n",
"\n",
"\n",
"class Chat(metaclass=ABCMeta) :\n",
"class Chat(metaclass=ABCMeta):\n",
"\n",
" def __init__(self, chat_id) :\n",
" def __init__(self, chat_id):\n",
" self.chat_id = chat_id\n",
" self.users = []\n",
" self.messages = []\n",
"\n",
"\n",
"class PrivateChat(Chat) :\n",
"class PrivateChat(Chat):\n",
"\n",
" def __init__(self, first_user, second_user) :\n",
" super(PrivateChat, self) .__init__() \n",
" self.users.append(first_user) \n",
" self.users.append(second_user) \n",
" def __init__(self, first_user, second_user):\n",
" super(PrivateChat, self).__init__()\n",
" self.users.append(first_user)\n",
" self.users.append(second_user)\n",
"\n",
"\n",
"class GroupChat(Chat) :\n",
"class GroupChat(Chat):\n",
"\n",
" def add_user(self, user) : # ...\n",
" def remove_user(self, user) : # ... \n",
" def add_user(self, user): # ...\n",
" def remove_user(self, user): # ... \n",
"\n",
"\n",
"class Message(object) :\n",
"class Message(object):\n",
"\n",
" def __init__(self, message_id, message, timestamp) :\n",
" def __init__(self, message_id, message, timestamp):\n",
" self.message_id = message_id\n",
" self.message = message\n",
" self.timestamp = timestamp\n",
"\n",
"\n",
"class AddRequest(object) :\n",
"class AddRequest(object):\n",
"\n",
" def __init__(self, from_user_id, to_user_id, request_status, timestamp) :\n",
" def __init__(self, from_user_id, to_user_id, request_status, timestamp):\n",
" self.from_user_id = from_user_id\n",
" self.to_user_id = to_user_id\n",
" self.request_status = request_status\n",
" self.timestamp = timestamp\n",
"\n",
"\n",
"class RequestStatus(Enum) :\n",
"class RequestStatus(Enum):\n",
"\n",
" UNREAD = 0\n",
" READ = 1\n",


@ -2,30 +2,30 @@ from abc import ABCMeta
from enum import Enum


class UserService(object):

    def __init__(self):
        self.users_by_id = {}  # key: user id, value: User

    def add_user(self, user_id, name, pass_hash):
        pass

    def remove_user(self, user_id):
        pass

    def add_friend_request(self, from_user_id, to_user_id):
        pass

    def approve_friend_request(self, from_user_id, to_user_id):
        pass

    def reject_friend_request(self, from_user_id, to_user_id):
        pass


class User(object):

    def __init__(self, user_id, name, pass_hash):
        self.user_id = user_id
        self.name = name
        self.pass_hash = pass_hash
@ -35,68 +35,68 @@ class User(object):
        self.received_friend_requests_by_friend_id = {}  # key: friend id, value: AddRequest
        self.sent_friend_requests_by_friend_id = {}  # key: friend id, value: AddRequest

    def message_user(self, friend_id, message):
        pass

    def message_group(self, group_id, message):
        pass

    def send_friend_request(self, friend_id):
        pass

    def receive_friend_request(self, friend_id):
        pass

    def approve_friend_request(self, friend_id):
        pass

    def reject_friend_request(self, friend_id):
        pass


class Chat(metaclass=ABCMeta):

    def __init__(self, chat_id):
        self.chat_id = chat_id
        self.users = []
        self.messages = []


class PrivateChat(Chat):

    def __init__(self, first_user, second_user):
        super(PrivateChat, self).__init__()
        self.users.append(first_user)
        self.users.append(second_user)


class GroupChat(Chat):

    def add_user(self, user):
        pass

    def remove_user(self, user):
        pass


class Message(object):

    def __init__(self, message_id, message, timestamp):
        self.message_id = message_id
        self.message = message
        self.timestamp = timestamp


class AddRequest(object):

    def __init__(self, from_user_id, to_user_id, request_status, timestamp):
        self.from_user_id = from_user_id
        self.to_user_id = to_user_id
        self.request_status = request_status
        self.timestamp = timestamp


class RequestStatus(Enum):

    UNREAD = 0
    READ = 1


@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook was prepared by [Donne Martin](https://github.com/donnemartin) . Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer) ."
"This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer)."
]
},
{
@ -59,107 +59,107 @@
"from abc import ABCMeta, abstractmethod\n",
"\n",
"\n",
"class VehicleSize(Enum) :\n",
"class VehicleSize(Enum):\n",
"\n",
" MOTORCYCLE = 0\n",
" COMPACT = 1\n",
" LARGE = 2\n",
"\n",
"\n",
"class Vehicle(metaclass=ABCMeta) :\n",
"class Vehicle(metaclass=ABCMeta):\n",
"\n",
" def __init__(self, vehicle_size, license_plate, spot_size) :\n",
" def __init__(self, vehicle_size, license_plate, spot_size):\n",
" self.vehicle_size = vehicle_size\n",
" self.license_plate = license_plate\n",
" self.spot_size = spot_size\n",
" self.spots_taken = []\n",
"\n",
" def clear_spots(self) :\n",
" def clear_spots(self):\n",
" for spot in self.spots_taken:\n",
" spot.remove_vehicle(self) \n",
" spot.remove_vehicle(self)\n",
" self.spots_taken = []\n",
"\n",
" def take_spot(self, spot) :\n",
" self.spots_taken.append(spot) \n",
" def take_spot(self, spot):\n",
" self.spots_taken.append(spot)\n",
"\n",
" @abstractmethod\n",
" def can_fit_in_spot(self, spot) :\n",
" def can_fit_in_spot(self, spot):\n",
" pass\n",
"\n",
"\n",
"class Motorcycle(Vehicle) :\n",
"class Motorcycle(Vehicle):\n",
"\n",
" def __init__(self, license_plate) :\n",
" super(Motorcycle, self) .__init__(VehicleSize.MOTORCYCLE, license_plate, spot_size=1) \n",
" def __init__(self, license_plate):\n",
" super(Motorcycle, self).__init__(VehicleSize.MOTORCYCLE, license_plate, spot_size=1)\n",
"\n",
" def can_fit_in_spot(self, spot) :\n",
" def can_fit_in_spot(self, spot):\n",
" return True\n",
"\n",
"\n",
"class Car(Vehicle) :\n",
"class Car(Vehicle):\n",
"\n",
" def __init__(self, license_plate) :\n",
" super(Car, self) .__init__(VehicleSize.COMPACT, license_plate, spot_size=1) \n",
" def __init__(self, license_plate):\n",
" super(Car, self).__init__(VehicleSize.COMPACT, license_plate, spot_size=1)\n",
"\n",
" def can_fit_in_spot(self, spot) :\n",
" def can_fit_in_spot(self, spot):\n",
" return True if (spot.size == LARGE or spot.size == COMPACT) else False\n",
"\n",
"\n",
"class Bus(Vehicle) :\n",
"class Bus(Vehicle):\n",
"\n",
" def __init__(self, license_plate) :\n",
" super(Bus, self) .__init__(VehicleSize.LARGE, license_plate, spot_size=5) \n",
" def __init__(self, license_plate):\n",
" super(Bus, self).__init__(VehicleSize.LARGE, license_plate, spot_size=5)\n",
"\n",
" def can_fit_in_spot(self, spot) :\n",
" def can_fit_in_spot(self, spot):\n",
" return True if spot.size == LARGE else False\n",
"\n",
"\n",
"class ParkingLot(object) :\n",
"class ParkingLot(object):\n",
"\n",
" def __init__(self, num_levels) :\n",
" def __init__(self, num_levels):\n",
" self.num_levels = num_levels\n",
" self.levels = []\n",
"\n",
" def park_vehicle(self, vehicle) :\n",
" def park_vehicle(self, vehicle):\n",
" for level in levels:\n",
" if level.park_vehicle(vehicle) :\n",
" if level.park_vehicle(vehicle):\n",
" return True\n",
" return False\n",
"\n",
"\n",
"class Level(object) :\n",
"class Level(object):\n",
"\n",
" SPOTS_PER_ROW = 10\n",
"\n",
" def __init__(self, floor, total_spots) :\n",
" def __init__(self, floor, total_spots):\n",
" self.floor = floor\n",
" self.num_spots = total_spots\n",
" self.available_spots = 0\n",
" self.parking_spots = []\n",
"\n",
" def spot_freed(self) :\n",
" def spot_freed(self):\n",
" self.available_spots += 1\n",
"\n",
" def park_vehicle(self, vehicle) :\n",
" spot = self._find_available_spot(vehicle) \n",
" def park_vehicle(self, vehicle):\n",
" spot = self._find_available_spot(vehicle)\n",
" if spot is None:\n",
" return None\n",
" else:\n",
" spot.park_vehicle(vehicle) \n",
" spot.park_vehicle(vehicle)\n",
" return spot\n",
"\n",
" def _find_available_spot(self, vehicle) :\n",
" def _find_available_spot(self, vehicle):\n",
" \"\"\"Find an available spot where vehicle can fit, or return None\"\"\"\n",
" # ...\n",
"\n",
" def _park_starting_at_spot(self, spot, vehicle) :\n",
" def _park_starting_at_spot(self, spot, vehicle):\n",
" \"\"\"Occupy starting at spot.spot_number to vehicle.spot_size.\"\"\"\n",
" # ...\n",
"\n",
"\n",
"class ParkingSpot(object) :\n",
"class ParkingSpot(object):\n",
"\n",
" def __init__(self, level, row, spot_number, spot_size, vehicle_size) :\n",
" def __init__(self, level, row, spot_number, spot_size, vehicle_size):\n",
" self.level = level\n",
" self.row = row\n",
" self.spot_number = spot_number\n",
@ -167,16 +167,16 @@
" self.vehicle_size = vehicle_size\n",
" self.vehicle = None\n",
"\n",
" def is_available(self) :\n",
" def is_available(self):\n",
" return True if self.vehicle is None else False\n",
"\n",
" def can_fit_vehicle(self, vehicle) :\n",
" def can_fit_vehicle(self, vehicle):\n",
" if self.vehicle is not None:\n",
" return False\n",
" return vehicle.can_fit_in_spot(self) \n",
" return vehicle.can_fit_in_spot(self)\n",
"\n",
" def park_vehicle(self, vehicle) : # ...\n",
" def remove_vehicle(self) : # ..."
" def park_vehicle(self, vehicle): # ...\n",
" def remove_vehicle(self): # ..."
]
}
],


@ -2,107 +2,107 @@ from abc import ABCMeta, abstractmethod
from enum import Enum


class VehicleSize(Enum):

    MOTORCYCLE = 0
    COMPACT = 1
    LARGE = 2


class Vehicle(metaclass=ABCMeta):

    def __init__(self, vehicle_size, license_plate, spot_size):
        self.vehicle_size = vehicle_size
        self.license_plate = license_plate
        self.spot_size = spot_size
        self.spots_taken = []

    def clear_spots(self):
        for spot in self.spots_taken:
            spot.remove_vehicle(self)
        self.spots_taken = []

    def take_spot(self, spot):
        self.spots_taken.append(spot)

    @abstractmethod
    def can_fit_in_spot(self, spot):
        pass


class Motorcycle(Vehicle):

    def __init__(self, license_plate):
        super(Motorcycle, self).__init__(VehicleSize.MOTORCYCLE, license_plate, spot_size=1)

    def can_fit_in_spot(self, spot):
        return True


class Car(Vehicle):

    def __init__(self, license_plate):
        super(Car, self).__init__(VehicleSize.COMPACT, license_plate, spot_size=1)

    def can_fit_in_spot(self, spot):
        return spot.size in (VehicleSize.LARGE, VehicleSize.COMPACT)


class Bus(Vehicle):

    def __init__(self, license_plate):
        super(Bus, self).__init__(VehicleSize.LARGE, license_plate, spot_size=5)

    def can_fit_in_spot(self, spot):
        return spot.size == VehicleSize.LARGE


class ParkingLot(object):

    def __init__(self, num_levels):
        self.num_levels = num_levels
        self.levels = []  # List of Levels

    def park_vehicle(self, vehicle):
        for level in self.levels:
            if level.park_vehicle(vehicle):
                return True
        return False


class Level(object):

    SPOTS_PER_ROW = 10

    def __init__(self, floor, total_spots):
        self.floor = floor
        self.num_spots = total_spots
        self.available_spots = 0
        self.spots = []  # List of ParkingSpots

    def spot_freed(self):
        self.available_spots += 1

    def park_vehicle(self, vehicle):
        spot = self._find_available_spot(vehicle)
        if spot is None:
            return None
        else:
            spot.park_vehicle(vehicle)
            return spot

    def _find_available_spot(self, vehicle):
        """Find an available spot where vehicle can fit, or return None"""
        pass

    def _park_starting_at_spot(self, spot, vehicle):
        """Occupy starting at spot.spot_number to vehicle.spot_size."""
        pass


class ParkingSpot(object):

    def __init__(self, level, row, spot_number, spot_size, vehicle_size):
        self.level = level
        self.row = row
        self.spot_number = spot_number
@ -110,16 +110,16 @@ class ParkingSpot(object):
        self.vehicle_size = vehicle_size
        self.vehicle = None

    def is_available(self):
        return self.vehicle is None

    def can_fit_vehicle(self, vehicle):
        if self.vehicle is not None:
            return False
        return vehicle.can_fit_in_spot(self)

    def park_vehicle(self, vehicle):
        pass

    def remove_vehicle(self):
        pass


@ -1,6 +1,6 @@
# 设计 Mint.com
**注意:这个文档中的链接会直接指向[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题索引)中的有关部分,以避免重复的内容。您可以参考链接的相关内容,来了解其总的要点、方案的权衡取舍以及可选的替代方案。**
## 第一步:简述用例与约束条件
@ -80,7 +80,7 @@
> 列出所有重要组件以规划概要设计。
![Imgur](http://i.imgur.com/E8klrBh.png)
## 第三步:设计核心组件
@ -88,9 +88,9 @@
### 用例:用户连接到一个财务账户
我们可以将 1000 万用户的信息存储在一个[关系数据库](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)中。我们应该讨论一下[选择SQL或NoSQL之间的用例和权衡](https://github.com/donnemartin/system-design-primer#sql-or-nosql)了。
* **客户端** 向作为[反向代理](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)运行的 **Web 服务器** 发送请求
* **Web 服务器** 转发请求到 **账户API** 服务器
* **账户API** 服务器将新输入的账户信息更新到 **SQL数据库** 的`accounts`表
@ -106,13 +106,13 @@ account_url varchar(255) NOT NULL
account_login varchar(32) NOT NULL
account_password_hash char(64) NOT NULL
user_id int NOT NULL
PRIMARY KEY(id)
FOREIGN KEY(user_id) REFERENCES users(id)
```
我们将在`id`、`user_id` 和 `created_at` 等字段上创建一个[索引](https://github.com/donnemartin/system-design-primer#use-good-indices)以加速查找(对数时间而不是扫描整个表)并保持数据在内存中。从内存中顺序读取 1 MB 数据花费大约 250 微秒,而从SSD读取是其4倍,从磁盘读取是其80倍。<sup><a href=https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know>1</a></sup>
我们将使用公开的[**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
```
$ curl -X POST --data '{ "user_id": "foo", "account_url": "bar", \
@ -120,7 +120,7 @@ $ curl -X POST --data '{ "user_id": "foo", "account_url": "bar", \
https://mint.com/api/v1/account
```
对于内部通信,我们可以使用[远程过程调用](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)。
接下来,服务从账户中提取交易。
@ -136,8 +136,8 @@ $ curl -X POST --data '{ "user_id": "foo", "account_url": "bar", \
* **客户端**向 **Web服务器** 发送请求
* **Web服务器** 将请求转发到 **帐户API** 服务器
* **帐户API** 服务器将job放在 **队列** 中,如 [Amazon SQS](https://aws.amazon.com/sqs/) 或者 [RabbitMQ](https://www.rabbitmq.com/)
* 提取交易可能需要一段时间,我们可能希望[与队列异步](https://github.com/donnemartin/system-design-primer#asynchronism)地来做,虽然这会引入额外的复杂度。
* **交易提取服务** 执行如下操作:
* 从 **Queue** 中拉取并从金融机构中提取给定用户的交易,将结果作为原始日志文件存储在 **对象存储区**。
* 使用 **分类服务** 来分类每个交易
@ -156,25 +156,25 @@ created_at datetime NOT NULL
seller varchar(32) NOT NULL
amount decimal NOT NULL
user_id int NOT NULL
PRIMARY KEY(id)
FOREIGN KEY(user_id) REFERENCES users(id)
```
我们将在 `id``user_id`,和 `created_at`字段上创建[索引](https://github.com/donnemartin/system-design-primer#use-good-indices)。
`monthly_spending`表应该具有如下结构:
```
id int NOT NULL AUTO_INCREMENT
month_year date NOT NULL
category varchar(32)
amount decimal NOT NULL
user_id int NOT NULL
PRIMARY KEY(id)
FOREIGN KEY(user_id) REFERENCES users(id)
```
我们将在`id``user_id`字段上创建[索引](https://github.com/donnemartin/system-design-primer#use-good-indices)。
#### 分类服务
@ -183,7 +183,7 @@ FOREIGN KEY(user_id) REFERENCES users(id)
**告知你的面试官你准备写多少代码**。
```python
class DefaultCategories(Enum):

    HOUSING = 0
    FOOD = 1
@ -200,19 +200,19 @@ seller_category_map['Target'] = DefaultCategories.SHOPPING
对于一开始没有在映射中的卖家,我们可以通过评估用户提供的手动类别来进行众包。在 O(1) 时间内,我们可以用堆来快速查找每个卖家的顶端的手动覆盖。
```python
class Categorizer(object):

    def __init__(self, seller_category_map, seller_category_crowd_overrides_map):
        self.seller_category_map = seller_category_map
        self.seller_category_crowd_overrides_map = \
            seller_category_crowd_overrides_map

    def categorize(self, transaction):
        if transaction.seller in self.seller_category_map:
            return self.seller_category_map[transaction.seller]
        elif transaction.seller in self.seller_category_crowd_overrides_map:
            self.seller_category_map[transaction.seller] = \
                self.seller_category_crowd_overrides_map[transaction.seller].peek_min()
            return self.seller_category_map[transaction.seller]
        return None
```
@ -220,9 +220,9 @@ class Categorizer(object) :
交易实现:
```python
class Transaction(object):

    def __init__(self, created_at, seller, amount):
        self.created_at = created_at
        self.seller = seller
        self.amount = amount
@ -234,13 +234,13 @@ class Transaction(object) :
`TABLE budget_overrides`中存储此覆盖。
```python
class Budget(object):

    def __init__(self, income):
        self.income = income
        self.categories_to_budget_map = self.create_budget_template()

    def create_budget_template(self):
        return {
            DefaultCategories.HOUSING: self.income * .4,
            DefaultCategories.FOOD: self.income * .2,
@ -249,7 +249,7 @@ class Budget(object) :
            ...
        }

    def override_category_budget(self, category, amount):
        self.categories_to_budget_map[category] = amount
```
@ -275,26 +275,26 @@ user_id timestamp seller amount
**MapReduce** 实现:
```python
class SpendingByCategory(MRJob):

    def __init__(self, categorizer):
        self.categorizer = categorizer
        self.current_year_month = calc_current_year_month()
        ...

    def calc_current_year_month(self):
        """返回当前年月"""
        ...

    def extract_year_month(self, timestamp):
        """返回时间戳的年,月部分"""
        ...

    def handle_budget_notifications(self, key, total):
        """如果接近或超出预算,调用通知API"""
        ...

    def mapper(self, _, line):
        """解析每个日志行,提取和转换相关行。

        参数行应为如下形式:
@ -303,31 +303,31 @@ class SpendingByCategory(MRJob) :
        使用分类器来将卖家转换成类别,生成如下形式的key-value对:

        (user_id, 2016-01, shopping), 25
        (user_id, 2016-01, shopping), 100
        (user_id, 2016-01, gas), 50
        """
        user_id, timestamp, seller, amount = line.split('\t')
        category = self.categorizer.categorize(seller)
        period = self.extract_year_month(timestamp)
        if period == self.current_year_month:
            yield (user_id, period, category), amount

    def reducer(self, key, values):
        """将每个key对应的值求和。

        (user_id, 2016-01, shopping), 125
        (user_id, 2016-01, gas), 50
        """
        total = sum(values)
        yield key, total
```
## 第四步:设计扩展
> 根据限制条件,找到并解决瓶颈。
![Imgur](http://i.imgur.com/V5q57vU.png)
**重要提示:不要从最初设计直接跳到最终设计中!**
@ -337,20 +337,20 @@ class SpendingByCategory(MRJob) :
我们将会介绍一些组件来完成设计,并解决架构扩张问题。内置的负载均衡器将不做讨论以节省篇幅。
**为了避免重复讨论**,请参考[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)相关部分来了解其要点、方案的权衡取舍以及可选的替代方案。
* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
* [负载均衡器](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
* [水平拓展](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
* [反向代理web 服务器)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
* [API 服务(应用层)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
* [缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
* [关系型数据库管理系统 (RDBMS)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#关系型数据库管理系统rdbms)
* [SQL 故障主从切换](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#故障切换)
* [主从复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
* [异步](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#异步)
* [一致性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
* [可用性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
我们将增加一个额外的用例:**用户** 访问摘要和交易数据。
@ -366,7 +366,7 @@ class SpendingByCategory(MRJob) :
* 如果URL在 **SQL 数据库**中,获取该内容
* 以其内容更新 **内存缓存**
参考 [何时更新缓存](https://github.com/donnemartin/system-design-primer#when-to-update-the-cache) 中权衡和替代的内容。以上方法描述了 [cache-aside缓存模式](https://github.com/donnemartin/system-design-primer#cache-aside) .
参考 [何时更新缓存](https://github.com/donnemartin/system-design-primer#when-to-update-the-cache) 中权衡和替代的内容。以上方法描述了 [cache-aside缓存模式](https://github.com/donnemartin/system-design-primer#cache-aside).
我们可以使用诸如 Amazon Redshift 或者 Google BigQuery 等数据仓库解决方案,而不是将`monthly_spending`聚合表保留在 **SQL 数据库** 中。
@ -376,10 +376,10 @@ class SpendingByCategory(MRJob) :
*平均* 200 次交易写入每秒(峰值时更高)对于单个 **SQL 写入主-从服务** 来说可能是棘手的。我们可能需要考虑其它的 SQL 性能拓展技术:
* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-调优)
我们也可以考虑将一些数据移至 **NoSQL 数据库**
@ -389,50 +389,50 @@ class SpendingByCategory(MRJob) :
#### NoSQL
* [键-值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
* [文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
### 缓存
* 在哪缓存
* [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
* [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
* [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
* [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
* [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
* 什么需要缓存
* [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
* [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
* 何时更新缓存
* [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
* [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
* [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
* [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
### 异步与微服务
* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
### 通信
* 可权衡选择的方案:
* 与客户端的外部通信 - [使用 REST 作为 HTTP API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
* 服务器内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
### 安全性
请参阅[「安全」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全)一章。
### 延迟数值
请参阅[「每个程序员都应该知道的延迟数」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数)。
### 持续探讨


@ -80,7 +80,7 @@ Handy conversion guide:
> Outline a high level design with all important components.
![Imgur](http://i.imgur.com/E8klrBh.png)
## Step 3: Design core components
@ -88,9 +88,9 @@ Handy conversion guide:
### Use case: User connects to a financial account
We could store info on the 10 million users in a [relational database](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms). We should discuss the [use cases and tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql).
* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* The **Web Server** forwards the request to the **Accounts API** server
* The **Accounts API** server updates the **SQL Database** `accounts` table with the newly entered account info
@ -106,13 +106,13 @@ account_url varchar(255) NOT NULL
account_login varchar(32) NOT NULL
account_password_hash char(64) NOT NULL
user_id int NOT NULL
PRIMARY KEY(id)
FOREIGN KEY(user_id) REFERENCES users(id)
```
We'll create an [index](https://github.com/donnemartin/system-design-primer#use-good-indices) on `id`, `user_id`, and `created_at` to speed up lookups (log-time instead of scanning the entire table) and to keep the data in memory. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.<sup><a href=https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know>1</a></sup>
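As a rough sketch (assuming MySQL-style DDL; the index names are illustrative), the indices on the `accounts` table might be created as follows, with `id` already covered by the primary key:

```
CREATE INDEX idx_accounts_user_id ON accounts(user_id);
CREATE INDEX idx_accounts_created_at ON accounts(created_at);
```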
We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
```
$ curl -X POST --data '{ "user_id": "foo", "account_url": "bar", \
@ -120,7 +120,7 @@ $ curl -X POST --data '{ "user_id": "foo", "account_url": "bar", \
https://mint.com/api/v1/account
```
For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc).
Next, the service extracts transactions from the account.
@ -136,8 +136,8 @@ Data flow:
* The **Client** sends a request to the **Web Server**
* The **Web Server** forwards the request to the **Accounts API** server
* The **Accounts API** server places a job on a **Queue** such as [Amazon SQS](https://aws.amazon.com/sqs/) or [RabbitMQ](https://www.rabbitmq.com/)
* Extracting transactions could take a while; we'd probably want to do this [asynchronously with a queue](https://github.com/donnemartin/system-design-primer#asynchronism), although this introduces additional complexity
* The **Transaction Extraction Service** does the following:
* Pulls from the **Queue** and extracts transactions for the given account from the financial institution, storing the results as raw log files in the **Object Store**
* Uses the **Category Service** to categorize each transaction
@ -156,8 +156,8 @@ created_at datetime NOT NULL
seller varchar(32) NOT NULL
amount decimal NOT NULL
user_id int NOT NULL
PRIMARY KEY(id)
FOREIGN KEY(user_id) REFERENCES users(id)
```
We'll create an [index](https://github.com/donnemartin/system-design-primer#use-good-indices) on `id`, `user_id`, and `created_at`.
@ -167,11 +167,11 @@ The `monthly_spending` table could have the following structure:
```
id int NOT NULL AUTO_INCREMENT
month_year date NOT NULL
category varchar(32)
amount decimal NOT NULL
user_id int NOT NULL
PRIMARY KEY(id)
FOREIGN KEY(user_id) REFERENCES users(id)
```
We'll create an [index](https://github.com/donnemartin/system-design-primer#use-good-indices) on `id` and `user_id`.
@ -183,7 +183,7 @@ For the **Category Service**, we can seed a seller-to-category dictionary with t
**Clarify with your interviewer how much code you are expected to write**.
```python
class DefaultCategories(Enum):

    HOUSING = 0
    FOOD = 1
@ -200,19 +200,19 @@ seller_category_map['Target'] = DefaultCategories.SHOPPING
For sellers not initially seeded in the map, we could use a crowdsourcing effort by evaluating the manual category overrides our users provide. We could use a heap to quickly lookup the top manual override per seller in O(1) time.
```python
class Categorizer(object):

    def __init__(self, seller_category_map, seller_category_crowd_overrides_map):
        self.seller_category_map = seller_category_map
        self.seller_category_crowd_overrides_map = \
            seller_category_crowd_overrides_map

    def categorize(self, transaction):
        if transaction.seller in self.seller_category_map:
            return self.seller_category_map[transaction.seller]
        elif transaction.seller in self.seller_category_crowd_overrides_map:
            self.seller_category_map[transaction.seller] = \
                self.seller_category_crowd_overrides_map[transaction.seller].peek_min()
            return self.seller_category_map[transaction.seller]
        return None
```
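The `peek_min()` call above assumes some heap-backed helper holding the crowdsourced overrides for a seller; a minimal sketch of such a structure (hypothetical, not part of the original design) using Python's `heapq` could look like:

```python
import heapq


class CrowdOverrides(object):
    """Hypothetical helper: stores (rank, category) pairs for one seller so the
    top-ranked manual override can be read without popping it."""

    def __init__(self):
        self.heap = []

    def add_override(self, rank, category):
        heapq.heappush(self.heap, (rank, category))

    def peek_min(self):
        # heapq keeps the smallest tuple at index 0, so this read is O(1)
        return self.heap[0][1] if self.heap else None
```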
@ -220,9 +220,9 @@ class Categorizer(object) :
Transaction implementation:
```python
class Transaction(object):

    def __init__(self, created_at, seller, amount):
        self.created_at = created_at
        self.seller = seller
        self.amount = amount
@ -233,13 +233,13 @@ class Transaction(object) :
To start, we could use a generic budget template that allocates category amounts based on income tiers. Using this approach, we would not have to store the 100 million budget items identified in the constraints, only those that the user overrides. If a user overrides a budget category, we could store the override in the `budget_overrides` table.
```python
class Budget(object):

    def __init__(self, income):
        self.income = income
        self.categories_to_budget_map = self.create_budget_template()

    def create_budget_template(self):
        return {
            DefaultCategories.HOUSING: self.income * .4,
            DefaultCategories.FOOD: self.income * .2,
@ -248,7 +248,7 @@ class Budget(object) :
            ...
        }

    def override_category_budget(self, category, amount):
        self.categories_to_budget_map[category] = amount
```
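For example, a quick usage sketch of the template and an override (the income figure and amounts are illustrative only):

```python
budget = Budget(income=5000)
budget.categories_to_budget_map[DefaultCategories.HOUSING]  # 2000.0 with the .4 allocation above

# The user bumps their food budget; we'd also persist this override in budget_overrides
budget.override_category_budget(DefaultCategories.FOOD, 1500)
```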
@ -274,26 +274,26 @@ user_id timestamp seller amount
**MapReduce** implementation:
```python
class SpendingByCategory(MRJob):

    def __init__(self, categorizer):
        self.categorizer = categorizer
        self.current_year_month = calc_current_year_month()
        ...

    def calc_current_year_month(self):
        """Return the current year and month."""
        ...

    def extract_year_month(self, timestamp):
        """Return the year and month portions of the timestamp."""
        ...

    def handle_budget_notifications(self, key, total):
        """Call notification API if nearing or exceeded budget."""
        ...

    def mapper(self, _, line):
        """Parse each log line, extract and transform relevant lines.

        Argument line will be of the form:
@ -303,31 +303,31 @@ class SpendingByCategory(MRJob) :
Using the categorizer to convert seller to category,
emit key value pairs of the form:
(user_id, 2016-01, shopping) , 25
(user_id, 2016-01, shopping) , 100
(user_id, 2016-01, gas) , 50
(user_id, 2016-01, shopping), 25
(user_id, 2016-01, shopping), 100
(user_id, 2016-01, gas), 50
"""
user_id, timestamp, seller, amount = line.split('\t')
category = self.categorizer.categorize(seller)
period = self.extract_year_month(timestamp)
user_id, timestamp, seller, amount = line.split('\t')
category = self.categorizer.categorize(seller)
period = self.extract_year_month(timestamp)
if period == self.current_year_month:
yield (user_id, period, category) , amount
yield (user_id, period, category), amount
def reducer(self, key, value) :
def reducer(self, key, value):
"""Sum values for each key.
(user_id, 2016-01, shopping) , 125
(user_id, 2016-01, gas) , 50
(user_id, 2016-01, shopping), 125
(user_id, 2016-01, gas), 50
"""
total = sum(values)
yield key, sum(values)
total = sum(values)
yield key, sum(values)
```
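The `handle_budget_notifications` stub leaves the thresholding logic open. A minimal sketch of what it might check, where the 90% warning threshold and the `notify` callable are assumptions:

```python
WARN_RATIO = 0.9  # assumption: warn when 90% of the budget is spent


def check_budget(category_budget, total_spent, notify):
    """Call the notification API if a user is nearing or over budget.

    `notify` is a hypothetical callable wrapping the notification API.
    """
    if total_spent >= category_budget:
        notify('You have exceeded your budget for this category.')
    elif total_spent >= WARN_RATIO * category_budget:
        notify('You have spent over 90% of your budget for this category.')


# Usage sketch
check_budget(category_budget=500, total_spent=470, notify=print)
```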
## Step 4: Scale the design
> Identify and address bottlenecks, given the constraints.
![Imgur](http://i.imgur.com/V5q57vU.png)
**Important: Do not simply jump right into the final design from the initial design!**
We'll introduce some components to complete the design and to address scalability issues.
*To avoid repeating discussions*, refer to the following [system design topics](https://github.com/donnemartin/system-design-primer#index-of-system-design-topics) for main talking points, tradeoffs, and alternatives:
* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
* [CDN](https://github.com/donnemartin/system-design-primer#content-delivery-network)
* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* [API server (application layer)](https://github.com/donnemartin/system-design-primer#application-layer)
* [Cache](https://github.com/donnemartin/system-design-primer#cache)
* [Relational database management system (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
* [SQL write master-slave failover](https://github.com/donnemartin/system-design-primer#fail-over)
* [Master-slave replication](https://github.com/donnemartin/system-design-primer#master-slave-replication)
* [Asynchronism](https://github.com/donnemartin/system-design-primer#asynchronism)
* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
We'll add an additional use case: **User** accesses summaries and transactions.
User sessions, aggregate stats by category, and recent transactions could be placed in a **Memory Cache**.
* If the url is in the **SQL Database**, fetches the contents
* Updates the **Memory Cache** with the contents
Refer to [When to update the cache](https://github.com/donnemartin/system-design-primer#when-to-update-the-cache) for tradeoffs and alternatives. The approach above describes [cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside).
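A minimal sketch of that cache-aside read path, where the `cache` and `db` clients, the key format, the `monthly_spending` columns, and the TTL are assumptions rather than part of the design above:

```python
import json


def get_monthly_spending(cache, db, user_id, period):
    """Cache-aside read: try the Memory Cache first, fall back to SQL.

    `cache` is assumed to expose get/setex (e.g. a Redis client) and
    `db` a query() helper returning dict-like rows; both are hypothetical.
    """
    key = 'monthly_spending:{}:{}'.format(user_id, period)
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    rows = db.query(
        'SELECT category, total FROM monthly_spending '
        'WHERE user_id = %s AND period = %s',
        (user_id, period))
    result = {row['category']: row['total'] for row in rows}
    cache.setex(key, 60, json.dumps(result))  # short TTL to bound staleness
    return result
```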
Instead of keeping the `monthly_spending` aggregate table in the **SQL Database**, we could create a separate **Analytics Database** using a data warehousing solution such as Amazon Redshift or Google BigQuery.
We might only want to store a month of `transactions` data in the database, while storing the rest in a data warehouse or in an **Object Store**. An **Object Store** such as Amazon S3 can comfortably handle the constraint of 250 GB of new content per month.
To address the 200 *average* read requests per second (higher at peak), traffic for popular content should be handled by the **Memory Cache** instead of the database. The **Memory Cache** is also useful for handling the unevenly distributed traffic and traffic spikes. The **SQL Read Replicas** should be able to handle the cache misses, as long as the replicas are not bogged down with replicating writes.
2,000 *average* transaction writes per second (higher at peak) might be tough for a single **SQL Write Master-Slave**. We might need to employ additional SQL scaling patterns (a sharding sketch follows the list below):
* [Federation](https://github.com/donnemartin/system-design-primer#federation)
* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
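For example, [sharding](https://github.com/donnemartin/system-design-primer#sharding) could split the `transactions` table across several write masters by hashing `user_id`. A minimal sketch, where the shard count and connection helper are assumptions:

```python
import zlib

NUM_SHARDS = 4  # assumption for illustration


def shard_for_user(user_id, num_shards=NUM_SHARDS):
    """Map a user to a shard so all of their transactions live together."""
    return zlib.crc32(str(user_id).encode('utf-8')) % num_shards


def insert_transaction(shard_connections, user_id, created_at, seller, amount):
    """Write a transaction to the shard that owns this user.

    `shard_connections` is a hypothetical list of database connections,
    one per shard, each exposing an execute() method.
    """
    connection = shard_connections[shard_for_user(user_id)]
    connection.execute(
        'INSERT INTO transactions (user_id, created_at, seller, amount) '
        'VALUES (%s, %s, %s, %s)',
        (user_id, created_at, seller, amount))
```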
We should also consider moving some data to a **NoSQL Database**.
#### NoSQL
* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### Caching
* Where to cache
    * [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
    * [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
    * [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
    * [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
    * [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* What to cache
    * [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
    * [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* When to update the cache
    * [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
    * [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
    * [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
    * [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
### Asynchronism and microservices
* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
### Communications
* Discuss tradeoffs:
    * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
    * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
### Security
Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
### Latency numbers
See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
### Ongoing

The accompanying standalone `mrjob` solution file:
```python
from mrjob.job import MRJob


class SpendingByCategory(MRJob):

    def __init__(self, categorizer):
        self.categorizer = categorizer
        ...

    def current_year_month(self):
        """Return the current year and month."""
        ...

    def extract_year_month(self, timestamp):
        """Return the year and month portions of the timestamp."""
        ...

    def handle_budget_notifications(self, key, total):
        """Call notification API if nearing or exceeded budget."""
        ...

    def mapper(self, _, line):
        """Parse each log line, extract and transform relevant lines.

        Emit key value pairs of the form:

        (2016-01, shopping), 25
        (2016-01, shopping), 100
        (2016-01, gas), 50
        """
        timestamp, category, amount = line.split('\t')
        period = self.extract_year_month(timestamp)
        if period == self.current_year_month():
            yield (period, category), amount

    def reducer(self, key, values):
        """Sum values for each key.

        (2016-01, shopping), 125
        (2016-01, gas), 50
        """
        total = sum(values)
        self.handle_budget_notifications(key, total)
        yield key, total

    def steps(self):
        """Run the map and reduce steps."""
        return [
            self.mr(mapper=self.mapper,
                    reducer=self.reducer)
        ]


if __name__ == '__main__':
    SpendingByCategory.run()
```

The accompanying categorizer, transaction, and budget solution file:
```python
from enum import Enum


class DefaultCategories(Enum):

    HOUSING = 0
    FOOD = 1
    GAS = 2
    SHOPPING = 3
    # ...

seller_category_map = {}
seller_category_map['Exxon'] = DefaultCategories.GAS
seller_category_map['Target'] = DefaultCategories.SHOPPING


class Categorizer(object):

    def __init__(self, seller_category_map, seller_category_overrides_map):
        self.seller_category_map = seller_category_map
        self.seller_category_overrides_map = seller_category_overrides_map

    def categorize(self, transaction):
        if transaction.seller in self.seller_category_map:
            return self.seller_category_map[transaction.seller]
        if transaction.seller in self.seller_category_overrides_map:
            self.seller_category_map[transaction.seller] = \
                self.seller_category_overrides_map[transaction.seller].peek_min()
            return self.seller_category_map[transaction.seller]
        return None


class Transaction(object):

    def __init__(self, timestamp, seller, amount):
        self.timestamp = timestamp
        self.seller = seller
        self.amount = amount


class Budget(object):

    def __init__(self, template_categories_to_budget_map):
        self.categories_to_budget_map = template_categories_to_budget_map

    def override_category_budget(self, category, amount):
        self.categories_to_budget_map[category] = amount
```

# Design Pastebin.com (or Bit.ly)

**Note: To avoid duplication, this document links directly to relevant areas of the [system design topics](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引). Refer to the linked content for general talking points, tradeoffs, and alternatives.**

**Design Bit.ly** - is a similar question, except that pastebin requires storing the paste contents instead of the original unshortened url.

* `paste_path` - 255 bytes
* total = ~1.27 KB
* 12.7 GB of new paste content per month
    * (1.27 * 10000000) KB of pastes per month
* ~450 GB of new paste content in 3 years
* 360 million shortlinks in 3 years
* Assume most are new pastes instead of updates to existing ones

> Outline a high level design with all important components.

![Imgur](http://i.imgur.com/BKsBnmG.png)

## Step 3: Design core components

### Use case: User enters a block of text and gets a randomly generated link

We could use a [relational database](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#关系型数据库管理系统rdbms) as a large hash table, mapping the generated url to a file server and path containing the paste file.

Instead of managing a file server, we could use a managed **Object Store** such as Amazon S3 or a [NoSQL document store](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储).

As an alternative to a relational database acting as a large hash table, we could use a [NoSQL key-value store](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储). We should discuss the [tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql). The following discussion uses the relational database approach.

* The **Client** sends a create paste request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
* The **Web Server** forwards the request to the **Write API** server
* The **Write API** server does the following:
    * Generates a unique url
```
shortlink char(7) NOT NULL
expiration_length_in_minutes int NOT NULL
created_at datetime NOT NULL
paste_path varchar(255) NOT NULL
PRIMARY KEY(shortlink)
```

We'll create a [database index](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#使用正确的索引) on the `shortlink` and `created_at` fields to speed up lookups (avoiding long queries caused by scanning the entire table) and to keep the data in memory. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.<sup><a href=https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数>1</a></sup>
To generate the unique url, we could:

* Base 62 encodes to `[a-zA-Z0-9]`, which works well for urls
* There is only one hash result for each original input, and Base 62 is deterministic (no randomness involved)
* Base 64 is another popular encoding, but it causes problems for urls because of the additional `+` and `/` characters
* The following [Base 62 pseudocode](http://stackoverflow.com/questions/742013/how-to-code-a-url-shortener) runs in O(k) time, where k is the number of digits = 7

```python
def base_encode(num, base=62):
    digits = []
    while num > 0:
        remainder = num % base
        digits.append(remainder)
        num //= base
    digits.reverse()
    return digits
```

```python
url = base_encode(md5(ip_address+timestamp))[:URL_LENGTH]
```
We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest):

```shell
$ curl -X POST --data '{"expiration_length_in_minutes":"60", \"paste_contents":"Hello World!"}' https://pastebin.com/api/v1/paste
```

Response:

```
{
    ...
}
```

For internal communications, we could use [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc).

### Use case: User enters a paste's url and views the contents

Since realtime analytics are not a requirement, we could simply **MapReduce** the **Web Server** logs to generate hit counts.
```python
class HitCounts(MRJob):

    def extract_url(self, line):
        """Extract the generated url from the log line."""
        ...

    def extract_year_month(self, line):
        """Return the year and month portions of the timestamp."""
        ...

    def mapper(self, _, line):
        """Parse each log line, extract and transform relevant lines.

        Emit key value pairs of the form:

        (2016-01, url0), 1
        (2016-01, url0), 1
        (2016-01, url1), 1
        """
        url = self.extract_url(line)
        period = self.extract_year_month(line)
        yield (period, url), 1

    def reducer(self, key, values):
        """Sum values for each key.

        (2016-01, url0), 2
        (2016-01, url1), 1
        """
        yield key, sum(values)
```
### Use case: Service deletes expired pastes

## Step 4: Scale the design

> Identify and address bottlenecks, given the constraints.

![Imgur](http://i.imgur.com/4edXG0T.png)

**Important: Do not simply jump right into the final design from the initial design!**

State that you would do this iteratively: 1) **Benchmark/Load test**, 2) **Profile** for bottlenecks, 3) address the bottlenecks while evaluating alternatives and tradeoffs, and 4) repeat. See [Design a system that scales to millions of users on AWS](../scaling_aws/README.md) as an example of how to iteratively scale the initial design.

It's important to discuss what bottlenecks you might encounter with the initial design and how you might address each of them. For example, what issues are addressed by adding a **Load Balancer** with multiple **Web Servers**? A **CDN**? **Master-Slave Replicas**? What are the alternatives and tradeoffs for each?

We'll introduce some components to complete the design and to address scalability issues. Internal load balancers are not shown to reduce clutter.

*To avoid repeating discussions*, refer to the following [system design topics](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引) for main talking points, tradeoffs, and alternatives:
* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
* [CDN](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#内容分发网络cdn)
* [Load balancer](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
* [Horizontal scaling](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
* [Application layer](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
* [Cache](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
* [Relational database management system (RDBMS)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#关系型数据库管理系统rdbms)
* [SQL write master-slave failover](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#故障切换)
* [Master-slave replication](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
* [Consistency patterns](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
* [Availability patterns](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
The **Analytics Database** could use a data warehousing solution such as Amazon Redshift or Google BigQuery.

An **Object Store** such as Amazon S3 can comfortably handle the constraint of 12.7 GB of new content per month.

To address the 40 *average* read requests per second (higher at peak), traffic for popular content should be handled by the **Memory Cache** instead of the database. The **Memory Cache** is also useful for handling the unevenly distributed traffic and traffic spikes. The **SQL Read Replicas** should be able to handle the cache misses, as long as the replicas are not bogged down with replicating writes.

4 *average* paste writes per second (higher at peak) should be doable for a single **SQL Write Master-Slave**. Otherwise, we'll need to employ additional SQL scaling patterns:

* [Federation](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
* [Sharding](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
* [Denormalization](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
* [SQL Tuning](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#SQL调优)

We should also consider moving some data to a **NoSQL Database**.
### NoSQL

* [Key-value store](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
* [Document store](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
* [Wide column store](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
* [Graph database](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
### Caching

* Where to cache
    * [Client caching](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
    * [CDN caching](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
    * [Web server caching](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
    * [Database caching](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
    * [Application caching](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
* What to cache
    * [Caching at the database query level](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
    * [Caching at the object level](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
* When to update the cache
    * [Cache-aside](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
    * [Write-through](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
    * [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
    * [Refresh ahead](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
### Asynchronism and microservices

* [Message queues](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
* [Task queues](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
* [Back pressure](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
* [Microservices](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)

### Communications

* Discuss tradeoffs:
    * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
    * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
* [Service discovery](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)

### Security

Refer to the [security section](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全).

### Latency numbers

See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数).

### Ongoing

# Design Pastebin.com (or Bit.ly)
*Note: This document links directly to relevant areas found in the [system design topics](https://github.com/donnemartin/system-design-primer#index-of-system-design-topics) to avoid duplication. Refer to the linked content for general talking points, tradeoffs, and alternatives.*
> Outline a high level design with all important components.
![Imgur](http://i.imgur.com/BKsBnmG.png)
## Step 3: Design core components
We could use a [relational database](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) as a large hash table, mapping the generated url to a file server and path containing the paste file.
Instead of managing a file server, we could use a managed **Object Store** such as Amazon S3 or a [NoSQL document store](https://github.com/donnemartin/system-design-primer#document-store).

As an alternative to a relational database acting as a large hash table, we could use a [NoSQL key-value store](https://github.com/donnemartin/system-design-primer#key-value-store). We should discuss the [tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql). The following discussion uses the relational database approach.
* The **Client** sends a create paste request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* The **Web Server** forwards the request to the **Write API** server
* The **Write API** server does the following:
    * Generates a unique url
        * Checks if the url is unique by looking at the **SQL Database** for a duplicate
        * If the url is not unique, it generates another url
        * If we supported a custom url, we could use the user-supplied (also check for a duplicate)
    * Saves to the **SQL Database** `pastes` table
    * Saves the paste data to the **Object Store**
    * Returns the url
```
shortlink char(7) NOT NULL
expiration_length_in_minutes int NOT NULL
created_at datetime NOT NULL
paste_path varchar(255) NOT NULL
PRIMARY KEY(shortlink)
```
Setting the primary key to be based on the `shortlink` column creates an [index](https://github.com/donnemartin/system-design-primer#use-good-indices) that the database uses to enforce uniqueness. We'll create an additional index on `created_at` to speed up lookups (log-time instead of scanning the entire table) and to keep the data in memory. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.<sup><a href=https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know>1</a></sup>
To generate the unique url, we could:
* Alternatively, we could also take the MD5 hash of randomly-generated data
* [**Base 62**](https://www.kerstner.at/2012/07/shortening-strings-using-base-62-encoding/) encode the MD5 hash
* Base 62 encodes to `[a-zA-Z0-9]` which works well for urls, eliminating the need for escaping special characters
* There is only one hash result for the original input and Base 62 is deterministic (no randomness involved)
* Base 64 is another popular encoding but provides issues for urls because of the additional `+` and `/` characters
* The following [Base 62 pseudocode](http://stackoverflow.com/questions/742013/how-to-code-a-url-shortener) runs in O(k) time where k is the number of digits = 7:
```python
def base_encode(num, base=62):
    digits = []
    while num > 0:
        remainder = num % base
        digits.append(remainder)
        num //= base
    digits.reverse()
    return digits
```
```python
url = base_encode(md5(ip_address+timestamp))[:URL_LENGTH]
```
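A runnable version of that line could look like the following sketch, where the use of `hashlib`/`time`, the Base 62 helper, and the sample ip address are assumptions consistent with the constraints above:

```python
import hashlib
import time

URL_LENGTH = 7
ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789'


def base62_encode(num):
    """Encode a non-negative integer as a Base 62 string."""
    if num == 0:
        return ALPHABET[0]
    digits = []
    while num > 0:
        num, remainder = divmod(num, 62)
        digits.append(ALPHABET[remainder])
    return ''.join(reversed(digits))


def generate_shortlink(ip_address):
    """MD5 the ip_address + timestamp, Base 62 encode, keep the first 7 chars."""
    md5_hex = hashlib.md5(
        '{}{}'.format(ip_address, time.time()).encode()).hexdigest()
    return base62_encode(int(md5_hex, 16))[:URL_LENGTH]


print(generate_shortlink('10.0.0.1'))  # a 7-character shortlink, e.g. '4FpQz1a'
```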
We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
```
$ curl -X POST --data '{ "expiration_length_in_minutes": "60", \
    "paste_contents": "Hello World!" }' https://pastebin.com/api/v1/paste
```

Response:

```
{
    ...
}
```
For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc).
### Use case: User enters a paste's url and views the contents
Since realtime analytics are not a requirement, we could simply **MapReduce** the **Web Server** logs to generate hit counts.
**Clarify with your interviewer how much code you are expected to write**.
```python
class HitCounts(MRJob):

    def extract_url(self, line):
        """Extract the generated url from the log line."""
        ...

    def extract_year_month(self, line):
        """Return the year and month portions of the timestamp."""
        ...

    def mapper(self, _, line):
        """Parse each log line, extract and transform relevant lines.

        Emit key value pairs of the form:

        (2016-01, url0), 1
        (2016-01, url0), 1
        (2016-01, url1), 1
        """
        url = self.extract_url(line)
        period = self.extract_year_month(line)
        yield (period, url), 1

    def reducer(self, key, values):
        """Sum values for each key.

        (2016-01, url0), 2
        (2016-01, url1), 1
        """
        yield key, sum(values)
```
### Use case: Service deletes expired pastes
To delete expired pastes, we could just scan the **SQL Database** for all entries whose expiration timestamp is older than the current timestamp.

## Step 4: Scale the design
> Identify and address bottlenecks, given the constraints.
![Imgur](http://i.imgur.com/4edXG0T.png)
**Important: Do not simply jump right into the final design from the initial design!**
We'll introduce some components to complete the design and to address scalability issues.
*To avoid repeating discussions*, refer to the following [system design topics](https://github.com/donnemartin/system-design-primer#index-of-system-design-topics) for main talking points, tradeoffs, and alternatives:
* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
* [CDN](https://github.com/donnemartin/system-design-primer#content-delivery-network)
* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* [API server (application layer)](https://github.com/donnemartin/system-design-primer#application-layer)
* [Cache](https://github.com/donnemartin/system-design-primer#cache)
* [Relational database management system (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
* [SQL write master-slave failover](https://github.com/donnemartin/system-design-primer#fail-over)
* [Master-slave replication](https://github.com/donnemartin/system-design-primer#master-slave-replication)
* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
The **Analytics Database** could use a data warehousing solution such as Amazon Redshift or Google BigQuery.
An **Object Store** such as Amazon S3 can comfortably handle the constraint of 12.7 GB of new content per month.
To address the 40 *average* read requests per second (higher at peak), traffic for popular content should be handled by the **Memory Cache** instead of the database. The **Memory Cache** is also useful for handling the unevenly distributed traffic and traffic spikes. The **SQL Read Replicas** should be able to handle the cache misses, as long as the replicas are not bogged down with replicating writes.
4 *average* paste writes per second (with higher at peak) should be do-able for a single **SQL Write Master-Slave**. Otherwise, we'll need to employ additional SQL scaling patterns:
* [Federation](https://github.com/donnemartin/system-design-primer#federation)
* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
We should also consider moving some data to a **NoSQL Database**.
#### NoSQL
* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### Caching
* Where to cache
    * [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
    * [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
    * [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
    * [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
    * [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* What to cache
    * [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
    * [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* When to update the cache
    * [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
    * [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
    * [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
    * [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
### Asynchronism and microservices
* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
### Communications
* Discuss tradeoffs:
    * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
    * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
### Security
Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
### Latency numbers
See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
### Ongoing

The accompanying standalone `mrjob` solution file:
```python
from mrjob.job import MRJob


class HitCounts(MRJob):

    def extract_url(self, line):
        """Extract the generated url from the log line."""
        pass

    def extract_year_month(self, line):
        """Return the year and month portions of the timestamp."""
        pass

    def mapper(self, _, line):
        """Parse each log line, extract and transform relevant lines.

        Emit key value pairs of the form:

        (2016-01, url0), 1
        (2016-01, url0), 1
        (2016-01, url1), 1
        """
        url = self.extract_url(line)
        period = self.extract_year_month(line)
        yield (period, url), 1

    def reducer(self, key, values):
        """Sum values for each key.

        (2016-01, url0), 2
        (2016-01, url1), 1
        """
        yield key, sum(values)

    def steps(self):
        """Run the map and reduce steps."""
        return [
            self.mr(mapper=self.mapper,
                    reducer=self.reducer)
        ]


if __name__ == '__main__':
    HitCounts.run()
```

# Design a key-value cache to save the results of the most recent web server queries

**Note: The links in this document point directly to the relevant sections of the [index of system design topics](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引) to avoid duplication. Refer to the linked content for general talking points, tradeoffs, and alternatives.**

## Step 1: Outline use cases and constraints

> Outline a high level design with all important components.

![Imgur](http://i.imgur.com/KqZ3dSx.png)

## Step 3: Design core components

Since the cache has limited capacity, we'll use a least recently used (LRU) approach to expire older entries.

* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
* The **Web Server** forwards the request to the **Query API** server
* The **Query API** server does the following:
    * Parses the query

**Query API** server implementation:
```python
class QueryApi(object):

    def __init__(self, memory_cache, reverse_index_service):
        self.memory_cache = memory_cache
        self.reverse_index_service = reverse_index_service

    def parse_query(self, query):
        """Remove markup, break the text into terms, fix typos,
        normalize capitalization, convert boolean operations.
        """
        ...

    def process_query(self, query):
        query = self.parse_query(query)
        results = self.memory_cache.get(query)
        if results is None:
            results = self.reverse_index_service.process_search(query)
            self.memory_cache.set(query, results)
        return results
```
**Node** implementation:

```python
class Node(object):

    def __init__(self, query, results):
        self.query = query
        self.results = results
```
**LinkedList** implementation:

```python
class LinkedList(object):

    def __init__(self):
        self.head = None
        self.tail = None

    def move_to_front(self, node):
        ...

    def append_to_front(self, node):
        ...

    def remove_from_tail(self):
        ...
```
**Cache** implementation:

```python
class Cache(object):

    def __init__(self, MAX_SIZE):
        self.MAX_SIZE = MAX_SIZE
        self.size = 0
        self.lookup = {}  # key: query, value: node
        self.linked_list = LinkedList()

    def get(self, query):
        """Get the stored query results from the cache.

        Accessing a node updates its position to the front of the LRU list.
        """
        node = self.lookup.get(query)
        if node is None:
            return None
        self.linked_list.move_to_front(node)
        return node.results

    def set(self, results, query):
        """Set the results for the given query key in the cache.

        When updating an entry, move it to the front of the LRU list.
        """
        node = self.lookup.get(query)
        if node is not None:
            # Key exists in cache, update the value
            node.results = results
            self.linked_list.move_to_front(node)
        else:
            # Key does not exist in cache
            if self.size == self.MAX_SIZE:
                # Find and remove the oldest entry in the linked list
                self.lookup.pop(self.linked_list.tail.query, None)
                self.linked_list.remove_from_tail()
            else:
                self.size += 1
            # Add the new key and value
            new_node = Node(query, results)
            self.linked_list.append_to_front(new_node)
            self.lookup[query] = new_node
```
The most straightforward way to handle these issues is to set a maximum time that an entry can stay in the cache before it is updated, usually referred to as its time to live (TTL).

Refer to [When to update the cache](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#何时更新缓存) for tradeoffs and alternatives. The approach above describes [cache-aside](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式).
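One way to bolt a TTL onto the `Cache` above is to stamp each node with its write time and treat stale entries as misses. A minimal sketch, where the `ttl_seconds` value and the extra `written_at` attribute are assumptions, not part of the original design:

```python
import time

TTL_SECONDS = 60  # assumption: tune to the freshness requirements


def set_with_ttl(cache, query, results):
    cache.set(results, query)
    node = cache.lookup[query]
    node.written_at = time.time()  # extra attribute added to the Node


def get_with_ttl(cache, query, ttl_seconds=TTL_SECONDS):
    node = cache.lookup.get(query)
    if node is None:
        return None
    if time.time() - node.written_at > ttl_seconds:
        return None  # stale: caller should recompute and set() again
    return cache.get(query)
```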
## Step 4: Scale the design

> Identify and address bottlenecks, given the constraints.

![Imgur](http://i.imgur.com/4j99mhe.png)

**Important: Do not simply jump right into the final design from the initial design!**

We'll introduce some components to complete the design and to address scalability issues. Internal load balancers are not shown to reduce clutter.

*To avoid repeating discussions*, refer to the following [system design topics](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引) for main talking points, tradeoffs, and alternatives:

* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
* [Load balancer](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
* [Horizontal scaling](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
* [API server (application layer)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
* [Cache](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
* [Consistency patterns](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
* [Availability patterns](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
### Expanding the Memory Cache to many machines

* **Each machine in the cache cluster has its own cache** - Simple, although it will likely result in a low cache hit rate.
* **Each machine in the cache cluster has a copy of the cache** - Simple, although it is an inefficient use of memory.
* **The cache is [sharded](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片) across all machines in the cache cluster** - More complex, although it is likely the best option. We could use hashing to determine which machine holds the needed entry, with `machine = hash(query)`. We could also consider using [consistent hashing](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#正在完善中).
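A minimal sketch of that shard selection, where the machine count and the CRC32-based hash are assumptions; consistent hashing would replace the modulo step:

```python
import zlib

NUM_CACHE_MACHINES = 8  # assumption for illustration


def cache_machine_for(query, num_machines=NUM_CACHE_MACHINES):
    """Pick which cache machine owns this query, i.e. machine = hash(query)."""
    return zlib.crc32(query.encode('utf-8')) % num_machines


# In production these would be remote cache clients, one per machine;
# local Cache instances stand in for them here.
cache_clients = [Cache(MAX_SIZE=1000) for _ in range(NUM_CACHE_MACHINES)]


def get_cached_results(query):
    return cache_clients[cache_machine_for(query)].get(query)
```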
## 其它要点
@ -247,58 +247,58 @@ class Cache(object) :
### SQL 缩放模式
* [读取复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-调优)
* [读取复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-调优)
#### NoSQL
* [键-值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
* [文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
* [键-值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
* [文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
### 缓存
* 在哪缓存
* [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
* [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
* [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
* [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
* [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
* [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
* [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
* [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
* [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
* [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
* 什么需要缓存
* [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
* [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
* [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
* [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
* 何时更新缓存
* [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
* [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
* [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
* [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
* [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
* [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
* [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
* [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
### 异步与微服务
* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
### 通信
* 可权衡选择的方案:
* 与客户端的外部通信 - [使用 REST 作为 HTTP API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
* 服务器内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
* 与客户端的外部通信 - [使用 REST 作为 HTTP API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
* 服务器内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
### 安全性
请参阅[「安全」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全) 一章。
请参阅[「安全」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全)一章。
### 延迟数值
请参阅[「每个程序员都应该知道的延迟数」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数)
请参阅[「每个程序员都应该知道的延迟数」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数)。
### 持续探讨
@ -58,7 +58,7 @@ Handy conversion guide:
> Outline a high level design with all important components.
![Imgur](http://i.imgur.com/KqZ3dSx.png)
![Imgur](http://i.imgur.com/KqZ3dSx.png)
## Step 3: Design core components
@ -70,7 +70,7 @@ Popular queries can be served from a **Memory Cache** such as Redis or Memcached
Since the cache has limited capacity, we'll use a least recently used (LRU) approach to expire older entries.
* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* The **Web Server** forwards the request to the **Query API** server
* The **Query API** server does the following:
* Parses the query
@ -98,33 +98,33 @@ The cache can use a doubly-linked list: new items will be added to the head whil
**Query API Server** implementation:
```python
class QueryApi(object) :
class QueryApi(object):
def __init__(self, memory_cache, reverse_index_service) :
def __init__(self, memory_cache, reverse_index_service):
self.memory_cache = memory_cache
self.reverse_index_service = reverse_index_service
def parse_query(self, query) :
def parse_query(self, query):
"""Remove markup, break text into terms, deal with typos,
normalize capitalization, convert to use boolean operations.
"""
...
def process_query(self, query) :
query = self.parse_query(query)
results = self.memory_cache.get(query)
def process_query(self, query):
query = self.parse_query(query)
results = self.memory_cache.get(query)
if results is None:
results = self.reverse_index_service.process_search(query)
self.memory_cache.set(query, results)
results = self.reverse_index_service.process_search(query)
self.memory_cache.set(query, results)
return results
```
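To see how the pieces fit together, here is a small, hypothetical wiring of the `QueryApi` class above with stand-in dependencies. `DictCache`, `StubReverseIndexService`, and the canned document names are illustrative stubs for this sketch, not components of the design:

```python
class DictCache(object):
    """Illustrative stand-in for the Memory Cache (e.g. Redis)."""

    def __init__(self):
        self._data = {}

    def get(self, query):
        return self._data.get(query)

    def set(self, query, results):
        self._data[query] = results


class StubReverseIndexService(object):
    """Illustrative stand-in for the Reverse Index Service."""

    def process_search(self, query):
        return ['doc1', 'doc7', 'doc42']  # canned results


query_api = QueryApi(DictCache(), StubReverseIndexService())
print(query_api.process_query('hello world'))  # miss: falls through to the stub index, then caches
print(query_api.process_query('hello world'))  # hit: served from the cache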
**Node** implementation:
```python
class Node(object) :
class Node(object):
def __init__(self, query, results) :
def __init__(self, query, results):
self.query = query
self.results = results
```
@ -132,34 +132,34 @@ class Node(object) :
**LinkedList** implementation:
```python
class LinkedList(object) :
class LinkedList(object):
def __init__(self) :
def __init__(self):
self.head = None
self.tail = None
def move_to_front(self, node) :
def move_to_front(self, node):
...
def append_to_front(self, node) :
def append_to_front(self, node):
...
def remove_from_tail(self) :
def remove_from_tail(self):
...
```
**Cache** implementation:
```python
class Cache(object) :
class Cache(object):
def __init__(self, MAX_SIZE) :
def __init__(self, MAX_SIZE):
self.MAX_SIZE = MAX_SIZE
self.size = 0
self.lookup = {} # key: query, value: node
self.linked_list = LinkedList()
self.linked_list = LinkedList()
def get(self, query):
def get(self, query):
"""Get the stored query result from the cache.
Accessing a node updates its position to the front of the LRU list.
@ -167,10 +167,10 @@ class Cache(object) :
node = self.lookup[query]
if node is None:
return None
self.linked_list.move_to_front(node)
self.linked_list.move_to_front(node)
return node.results
def set(self, query, results) :
def set(self, query, results):
"""Set the result for the given query key in the cache.
When updating an entry, updates its position to the front of the LRU list.
@ -181,18 +181,18 @@ class Cache(object) :
if node is not None:
# Key exists in cache, update the value
node.results = results
self.linked_list.move_to_front(node)
self.linked_list.move_to_front(node)
else:
# Key does not exist in cache
if self.size == self.MAX_SIZE:
# Remove the oldest entry from the linked list and lookup
self.lookup.pop(self.linked_list.tail.query, None)
self.linked_list.remove_from_tail()
self.lookup.pop(self.linked_list.tail.query, None)
self.linked_list.remove_from_tail()
else:
self.size += 1
# Add the new key and value
new_node = Node(query, results)
self.linked_list.append_to_front(new_node)
new_node = Node(query, results)
self.linked_list.append_to_front(new_node)
self.lookup[query] = new_node
```
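As a sanity check of the LRU behavior described above, here is a compact, self-contained sketch of the same idea built on `collections.OrderedDict` instead of an explicit linked list. It illustrates the eviction policy only and is not the design's prescribed implementation:

```python
from collections import OrderedDict


class LruCache(object):
    """Minimal LRU cache: most recently used entries live at the end."""

    def __init__(self, max_size):
        self.max_size = max_size
        self.entries = OrderedDict()

    def get(self, query):
        if query not in self.entries:
            return None
        self.entries.move_to_end(query)       # mark as most recently used
        return self.entries[query]

    def set(self, query, results):
        if query in self.entries:
            self.entries.move_to_end(query)
        elif len(self.entries) == self.max_size:
            self.entries.popitem(last=False)  # evict the least recently used
        self.entries[query] = results


cache = LruCache(max_size=2)
cache.set('foo', ['result1'])
cache.set('bar', ['result2'])
cache.get('foo')               # 'foo' becomes most recently used
cache.set('baz', ['result3'])  # evicts 'bar', the least recently used
assert cache.get('bar') is None
assert cache.get('foo') == ['result1']
```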
@ -204,15 +204,15 @@ The cache should be updated when:
* The page is removed or a new page is added
* The page rank changes
The most straightforward way to handle these cases is to simply set a max time that a cached entry can stay in the cache before it is updated, usually referred to as time to live (TTL) .
The most straightforward way to handle these cases is to simply set a max time that a cached entry can stay in the cache before it is updated, usually referred to as time to live (TTL).
Refer to [When to update the cache](https://github.com/donnemartin/system-design-primer#when-to-update-the-cache) for tradeoffs and alternatives. The approach above describes [cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside) .
Refer to [When to update the cache](https://github.com/donnemartin/system-design-primer#when-to-update-the-cache) for tradeoffs and alternatives. The approach above describes [cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside).
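As a rough illustration of the TTL idea, a cache-aside read path might store an expiry timestamp next to each entry and treat expired entries as misses. This is a minimal sketch under assumed names (`TtlCache`, `ttl_seconds`, `fetch_from_index`), not part of the original design:

```python
import time


class TtlCache(object):
    """Cache-aside with a per-entry time to live (TTL)."""

    def __init__(self, ttl_seconds):
        self.ttl_seconds = ttl_seconds
        self.entries = {}  # key: query, value: (results, expires_at)

    def get(self, query):
        entry = self.entries.get(query)
        if entry is None:
            return None
        results, expires_at = entry
        if time.time() > expires_at:
            del self.entries[query]  # expired: treat as a miss
            return None
        return results

    def set(self, query, results):
        self.entries[query] = (results, time.time() + self.ttl_seconds)


def fetch_from_index(query):
    return ['doc1', 'doc2']      # stand-in for the reverse index lookup


cache = TtlCache(ttl_seconds=60)

def process_query(query):
    results = cache.get(query)
    if results is None:          # miss or expired entry
        results = fetch_from_index(query)
        cache.set(query, results)
    return results
```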
## Step 4: Scale the design
> Identify and address bottlenecks, given the constraints.
![Imgur](http://i.imgur.com/4j99mhe.png)
![Imgur](http://i.imgur.com/4j99mhe.png)
**Important: Do not simply jump right into the final design from the initial design!**
@ -224,14 +224,14 @@ We'll introduce some components to complete the design and to address scalabilit
*To avoid repeating discussions*, refer to the following [system design topics](https://github.com/donnemartin/system-design-primer#index-of-system-design-topics) for main talking points, tradeoffs, and alternatives:
* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
* [Web server (reverse proxy) ](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* [API server (application layer) ](https://github.com/donnemartin/system-design-primer#application-layer)
* [Cache](https://github.com/donnemartin/system-design-primer#cache)
* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* [API server (application layer)](https://github.com/donnemartin/system-design-primer#application-layer)
* [Cache](https://github.com/donnemartin/system-design-primer#cache)
* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
### Expanding the Memory Cache to many machines
@ -239,7 +239,7 @@ To handle the heavy request load and the large amount of memory needed, we'll sc
* **Each machine in the cache cluster has its own cache** - Simple, although it will likely result in a low cache hit rate.
* **Each machine in the cache cluster has a copy of the cache** - Simple, although it is an inefficient use of memory.
* **The cache is [sharded](https://github.com/donnemartin/system-design-primer#sharding) across all machines in the cache cluster** - More complex, although it is likely the best option. We could use hashing to determine which machine could have the cached results of a query using `machine = hash(query) `. We'll likely want to use [consistent hashing](https://github.com/donnemartin/system-design-primer#under-development) .
* **The cache is [sharded](https://github.com/donnemartin/system-design-primer#sharding) across all machines in the cache cluster** - More complex, although it is likely the best option. We could use hashing to determine which machine could have the cached results of a query using `machine = hash(query)`. We'll likely want to use [consistent hashing](https://github.com/donnemartin/system-design-primer#under-development).
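A minimal sketch of picking a cache shard by hashing the query, in the spirit of the `machine = hash(query)` idea above. `NUM_MACHINES` and the choice of MD5 are assumptions for the sketch; the comment notes why consistent hashing is usually preferred once machines are added or removed:

```python
import hashlib

NUM_MACHINES = 4  # assumed size of the cache cluster


def get_shard(query, num_machines=NUM_MACHINES):
    """Map a query to a cache machine with a stable hash.

    Python's built-in hash() is randomized per process, so a digest is used
    instead. With simple modulo hashing, changing num_machines remaps almost
    every key; consistent hashing limits remapping to ~1/num_machines of keys.
    """
    digest = hashlib.md5(query.encode('utf-8')).hexdigest()
    return int(digest, 16) % num_machines


machine = get_shard('hello world')  # send the cache lookup to node `machine`
```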
## Additional talking points
@ -247,58 +247,58 @@ To handle the heavy request load and the large amount of memory needed, we'll sc
### SQL scaling patterns
* [Read replicas](https://github.com/donnemartin/system-design-primer#master-slave-replication)
* [Federation](https://github.com/donnemartin/system-design-primer#federation)
* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
* [Read replicas](https://github.com/donnemartin/system-design-primer#master-slave-replication)
* [Federation](https://github.com/donnemartin/system-design-primer#federation)
* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
#### NoSQL
* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### Caching
* Where to cache
* [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
* [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
* [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
* [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
* [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
* [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
* [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
* [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
* [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* What to cache
* [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
* [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
* [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* When to update the cache
* [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
* [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
* [Write-behind (write-back) ](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
* [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
* [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
* [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
* [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
* [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
### Asynchronism and microservices
* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
### Communications
* Discuss tradeoffs:
* External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
* Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
* External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
* Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
### Security
Refer to the [security section](https://github.com/donnemartin/system-design-primer#security) .
Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
### Latency numbers
See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know) .
See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
### Ongoing
@ -1,59 +1,59 @@
# -*- coding: utf-8 -*-
class QueryApi(object) :
class QueryApi(object):
def __init__(self, memory_cache, reverse_index_cluster) :
def __init__(self, memory_cache, reverse_index_cluster):
self.memory_cache = memory_cache
self.reverse_index_cluster = reverse_index_cluster
def parse_query(self, query) :
def parse_query(self, query):
"""Remove markup, break text into terms, deal with typos,
normalize capitalization, convert to use boolean operations.
"""
...
def process_query(self, query) :
query = self.parse_query(query)
results = self.memory_cache.get(query)
def process_query(self, query):
query = self.parse_query(query)
results = self.memory_cache.get(query)
if results is None:
results = self.reverse_index_cluster.process_search(query)
self.memory_cache.set(query, results)
results = self.reverse_index_cluster.process_search(query)
self.memory_cache.set(query, results)
return results
class Node(object) :
class Node(object):
def __init__(self, query, results) :
def __init__(self, query, results):
self.query = query
self.results = results
class LinkedList(object) :
class LinkedList(object):
def __init__(self) :
def __init__(self):
self.head = None
self.tail = None
def move_to_front(self, node) :
def move_to_front(self, node):
...
def append_to_front(self, node) :
def append_to_front(self, node):
...
def remove_from_tail(self) :
def remove_from_tail(self):
...
class Cache(object) :
class Cache(object):
def __init__(self, MAX_SIZE) :
def __init__(self, MAX_SIZE):
self.MAX_SIZE = MAX_SIZE
self.size = 0
self.lookup = {}
self.linked_list = LinkedList()
self.linked_list = LinkedList()
def get(self, query) :
def get(self, query):
"""Get the stored query result from the cache.
Accessing a node updates its position to the front of the LRU list.
@ -61,10 +61,10 @@ class Cache(object) :
node = self.lookup[query]
if node is None:
return None
self.linked_list.move_to_front(node)
self.linked_list.move_to_front(node)
return node.results
def set(self, query, results) :
def set(self, query, results):
"""Set the result for the given query key in the cache.
When updating an entry, updates its position to the front of the LRU list.
@ -75,16 +75,16 @@ class Cache(object) :
if node is not None:
# Key exists in cache, update the value
node.results = results
self.linked_list.move_to_front(node)
self.linked_list.move_to_front(node)
else:
# Key does not exist in cache
if self.size == self.MAX_SIZE:
# Remove the oldest entry from the linked list and lookup
self.lookup.pop(self.linked_list.tail.query, None)
self.linked_list.remove_from_tail()
self.lookup.pop(self.linked_list.tail.query, None)
self.linked_list.remove_from_tail()
else:
self.size += 1
# Add the new key and value
new_node = Node(query, results)
self.linked_list.append_to_front(new_node)
new_node = Node(query, results)
self.linked_list.append_to_front(new_node)
self.lookup[query] = new_node
@ -1,6 +1,6 @@
# 为 Amazon 设计分类售卖排行
**注意:这个文档中的链接会直接指向[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引) 中的有关部分,以避免重复的内容。你可以参考链接的相关内容,来了解其总的要点、方案的权衡取舍以及可选的替代方案。**
**注意:这个文档中的链接会直接指向[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)中的有关部分,以避免重复的内容。你可以参考链接的相关内容,来了解其总的要点、方案的权衡取舍以及可选的替代方案。**
## 第一步:简述用例与约束条件
@ -70,7 +70,7 @@
> 列出所有重要组件以规划概要设计。
![Imgur](http://i.imgur.com/vwMa1Qu.png)
![Imgur](http://i.imgur.com/vwMa1Qu.png)
## 第三步:设计核心组件
@ -95,94 +95,94 @@ t5 product4 category1 1 5.00 5 6
...
```
**售卖排行服务** 需要用到 **MapReduce**,并使用 **售卖 API** 服务进行日志记录,同时将结果写入 **SQL 数据库**中的总表 `sales_rank` 中。我们也可以讨论一下[究竟是用 SQL 还是用 NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
**售卖排行服务** 需要用到 **MapReduce**,并使用 **售卖 API** 服务进行日志记录,同时将结果写入 **SQL 数据库**中的总表 `sales_rank` 中。我们也可以讨论一下[究竟是用 SQL 还是用 NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)。
我们需要通过以下步骤使用 **MapReduce**
* **第 1 步** - 将数据转换为 `(category, product_id) , sum(quantity) ` 的形式
* **第 1 步** - 将数据转换为 `(category, product_id), sum(quantity)` 的形式
* **第 2 步** - 执行分布式排序
```python
class SalesRanker(MRJob) :
class SalesRanker(MRJob):
def within_past_week(self, timestamp) :
def within_past_week(self, timestamp):
"""如果时间戳属于过去的一周则返回 True
否则返回 False。"""
...
def mapper(self, _, line) :
def mapper(self, _, line):
"""解析日志的每一行,提取并转换相关行,
将键值对设定为如下形式:
(category1, product1) , 2
(category2, product1) , 2
(category2, product1) , 1
(category1, product2) , 3
(category2, product3) , 7
(category1, product4) , 1
(category1, product1), 2
(category2, product1), 2
(category2, product1), 1
(category1, product2), 3
(category2, product3), 7
(category1, product4), 1
"""
timestamp, product_id, category_id, quantity, total_price, seller_id, \
buyer_id = line.split('\t')
if self.within_past_week(timestamp) :
yield (category_id, product_id) , quantity
buyer_id = line.split('\t')
if self.within_past_week(timestamp):
yield (category_id, product_id), quantity
def reducer(self, key, values) :
def reducer(self, key, values):
"""将每个 key 的值加起来。
(category1, product1) , 2
(category2, product1) , 3
(category1, product2) , 3
(category2, product3) , 7
(category1, product4) , 1
(category1, product1), 2
(category2, product1), 3
(category1, product2), 3
(category2, product3), 7
(category1, product4), 1
"""
yield key, sum(values)
yield key, sum(values)
def mapper_sort(self, key, value) :
def mapper_sort(self, key, value):
"""构造 key 以确保正确的排序。
将键值对转换成如下形式:
(category1, 2) , product1
(category2, 3) , product1
(category1, 3) , product2
(category2, 7) , product3
(category1, 1) , product4
(category1, 2), product1
(category2, 3), product1
(category1, 3), product2
(category2, 7), product3
(category1, 1), product4
MapReduce 的随机排序步骤会将键
值的排序打乱,变成下面这样:
(category1, 1) , product4
(category1, 2) , product1
(category1, 3) , product2
(category2, 3) , product1
(category2, 7) , product3
(category1, 1), product4
(category1, 2), product1
(category1, 3), product2
(category2, 3), product1
(category2, 7), product3
"""
category_id, product_id = key
quantity = value
yield (category_id, quantity) , product_id
yield (category_id, quantity), product_id
def reducer_identity(self, key, value) :
def reducer_identity(self, key, value):
yield key, value
def steps(self) :
def steps(self):
""" 此处为 map reduce 步骤"""
return [
self.mr(mapper=self.mapper,
reducer=self.reducer) ,
reducer=self.reducer),
self.mr(mapper=self.mapper_sort,
reducer=self.reducer_identity) ,
reducer=self.reducer_identity),
]
```
得到的结果将会是如下的排序列,我们将其插入 `sales_rank` 表中:
```
(category1, 1) , product4
(category1, 2) , product1
(category1, 3) , product2
(category2, 3) , product1
(category2, 7) , product3
(category1, 1), product4
(category1, 2), product1
(category1, 3), product2
(category2, 3), product1
(category2, 7), product3
```
`sales_rank` 表的数据结构如下:
@ -192,20 +192,20 @@ id int NOT NULL AUTO_INCREMENT
category_id int NOT NULL
total_sold int NOT NULL
product_id int NOT NULL
PRIMARY KEY(id)
FOREIGN KEY(category_id) REFERENCES Categories(id)
FOREIGN KEY(product_id) REFERENCES Products(id)
PRIMARY KEY(id)
FOREIGN KEY(category_id) REFERENCES Categories(id)
FOREIGN KEY(product_id) REFERENCES Products(id)
```
我们会以 `id`、`category_id` 与 `product_id` 创建一个 [索引](https://github.com/donnemartin/system-design-primer#use-good-indices) 以加快查询速度(只需要使用读取日志的时间,不再需要每次都扫描整个数据表)并让数据常驻内存。从内存读取 1 MB 连续数据大约要花 250 微秒,而从 SSD 读取同样大小的数据要花费 4 倍的时间,从机械硬盘读取需要花费 80 倍以上的时间。<sup><a href=https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数>1</a></sup>
我们会以 `id`、`category_id` 与 `product_id` 创建一个 [索引](https://github.com/donnemartin/system-design-primer#use-good-indices)以加快查询速度(只需要使用读取日志的时间,不再需要每次都扫描整个数据表)并让数据常驻内存。从内存读取 1 MB 连续数据大约要花 250 微秒,而从 SSD 读取同样大小的数据要花费 4 倍的时间,从机械硬盘读取需要花费 80 倍以上的时间。<sup><a href=https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数>1</a></sup>
### 用例:用户需要根据分类浏览上周中最受欢迎的商品
* **客户端**向运行[反向代理](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器) **Web 服务器**发送一个请求
* **客户端**向运行[反向代理](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)的 **Web 服务器**发送一个请求
* 这个 **Web 服务器**将请求转发给**查询 API** 服务
* **查询 API** 服务将从 **SQL 数据库**的 `sales_rank` 表中读取数据
我们可以调用一个公共的 [REST API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
我们可以调用一个公共的 [REST API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
```
$ curl https://amazon.com/api/v1/popular?category_id=1234
@ -234,13 +234,13 @@ $ curl https://amazon.com/api/v1/popular?category_id=1234
},
```
而对于服务器内部的通信,我们可以使用 [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
而对于服务器内部的通信,我们可以使用 [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)。
## 第四步:架构扩展
> 根据限制条件,找到并解决瓶颈。
![Imgur](http://i.imgur.com/MzExP06.png)
![Imgur](http://i.imgur.com/MzExP06.png)
**重要提示:不要从最初设计直接跳到最终设计中!**
@ -250,19 +250,19 @@ $ curl https://amazon.com/api/v1/popular?category_id=1234
我们将会介绍一些组件来完成设计,并解决架构扩张问题。内置的负载均衡器将不做讨论以节省篇幅。
**为了避免重复讨论**,请参考[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引) 相关部分来了解其要点、方案的权衡取舍以及可选的替代方案。
**为了避免重复讨论**,请参考[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)相关部分来了解其要点、方案的权衡取舍以及可选的替代方案。
* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
* [负载均衡器](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
* [水平拓展](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
* [反向代理web 服务器)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
* [API 服务(应用层)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
* [缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
* [关系型数据库管理系统 (RDBMS) ](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#关系型数据库管理系统rdbms)
* [SQL 故障主从切换](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#故障切换)
* [主从复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
* [一致性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
* [可用性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
* [负载均衡器](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
* [水平拓展](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
* [反向代理web 服务器)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
* [API 服务(应用层)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
* [缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
* [关系型数据库管理系统 (RDBMS)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#关系型数据库管理系统rdbms)
* [SQL 故障主从切换](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#故障切换)
* [主从复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
* [一致性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
* [可用性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
**分析数据库** 可以用现成的数据仓储系统,例如使用 Amazon Redshift 或者 Google BigQuery 的解决方案。
@ -274,10 +274,10 @@ $ curl https://amazon.com/api/v1/popular?category_id=1234
SQL 缩放模式包括:
* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-调优)
* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-调优)
我们也可以考虑将一些数据移至 **NoSQL 数据库**
@ -287,50 +287,50 @@ SQL 缩放模式包括:
#### NoSQL
* [键-值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
* [文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
* [键-值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
* [文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
### 缓存
* 在哪缓存
* [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
* [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
* [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
* [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
* [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
* [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
* [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
* [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
* [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
* [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
* 什么需要缓存
* [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
* [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
* [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
* [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
* 何时更新缓存
* [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
* [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
* [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
* [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
* [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
* [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
* [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
* [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
### 异步与微服务
* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
### 通信
* 可权衡选择的方案:
* 与客户端的外部通信 - [使用 REST 作为 HTTP API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
* 服务器内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
* 与客户端的外部通信 - [使用 REST 作为 HTTP API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
* 服务器内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
### 安全性
请参阅[「安全」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全) 一章。
请参阅[「安全」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全)一章。
### 延迟数值
请参阅[「每个程序员都应该知道的延迟数」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数)
请参阅[「每个程序员都应该知道的延迟数」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数)。
### 持续探讨
@ -70,7 +70,7 @@ Handy conversion guide:
> Outline a high level design with all important components.
![Imgur](http://i.imgur.com/vwMa1Qu.png)
![Imgur](http://i.imgur.com/vwMa1Qu.png)
## Step 3: Design core components
@ -95,93 +95,93 @@ t5 product4 category1 1 5.00 5 6
...
```
The **Sales Rank Service** could use **MapReduce**, using the **Sales API** server log files as input and writing the results to an aggregate table `sales_rank` in a **SQL Database**. We should discuss the [use cases and tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql) .
The **Sales Rank Service** could use **MapReduce**, using the **Sales API** server log files as input and writing the results to an aggregate table `sales_rank` in a **SQL Database**. We should discuss the [use cases and tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql).
We'll use a multi-step **MapReduce**:
* **Step 1** - Transform the data to `(category, product_id) , sum(quantity) `
* **Step 1** - Transform the data to `(category, product_id), sum(quantity)`
* **Step 2** - Perform a distributed sort
```python
class SalesRanker(MRJob) :
class SalesRanker(MRJob):
def within_past_week(self, timestamp) :
def within_past_week(self, timestamp):
"""Return True if timestamp is within past week, False otherwise."""
...
def mapper(self, _, line) :
def mapper(self, _, line):
"""Parse each log line, extract and transform relevant lines.
Emit key value pairs of the form:
(category1, product1) , 2
(category2, product1) , 2
(category2, product1) , 1
(category1, product2) , 3
(category2, product3) , 7
(category1, product4) , 1
(category1, product1), 2
(category2, product1), 2
(category2, product1), 1
(category1, product2), 3
(category2, product3), 7
(category1, product4), 1
"""
timestamp, product_id, category_id, quantity, total_price, seller_id, \
buyer_id = line.split('\t')
if self.within_past_week(timestamp) :
yield (category_id, product_id) , quantity
buyer_id = line.split('\t')
if self.within_past_week(timestamp):
yield (category_id, product_id), quantity
def reducer(self, key, values) :
def reducer(self, key, values):
"""Sum values for each key.
(category1, product1) , 2
(category2, product1) , 3
(category1, product2) , 3
(category2, product3) , 7
(category1, product4) , 1
(category1, product1), 2
(category2, product1), 3
(category1, product2), 3
(category2, product3), 7
(category1, product4), 1
"""
yield key, sum(values)
yield key, sum(values)
def mapper_sort(self, key, value) :
def mapper_sort(self, key, value):
"""Construct key to ensure proper sorting.
Transform key and value to the form:
(category1, 2) , product1
(category2, 3) , product1
(category1, 3) , product2
(category2, 7) , product3
(category1, 1) , product4
(category1, 2), product1
(category2, 3), product1
(category1, 3), product2
(category2, 7), product3
(category1, 1), product4
The shuffle/sort step of MapReduce will then do a
distributed sort on the keys, resulting in:
(category1, 1) , product4
(category1, 2) , product1
(category1, 3) , product2
(category2, 3) , product1
(category2, 7) , product3
(category1, 1), product4
(category1, 2), product1
(category1, 3), product2
(category2, 3), product1
(category2, 7), product3
"""
category_id, product_id = key
quantity = value
yield (category_id, quantity) , product_id
yield (category_id, quantity), product_id
def reducer_identity(self, key, value) :
def reducer_identity(self, key, value):
yield key, value
def steps(self) :
def steps(self):
"""Run the map and reduce steps."""
return [
self.mr(mapper=self.mapper,
reducer=self.reducer) ,
reducer=self.reducer),
self.mr(mapper=self.mapper_sort,
reducer=self.reducer_identity) ,
reducer=self.reducer_identity),
]
```
The result would be the following sorted list, which we could insert into the `sales_rank` table:
```
(category1, 1) , product4
(category1, 2) , product1
(category1, 3) , product2
(category2, 3) , product1
(category2, 7) , product3
(category1, 1), product4
(category1, 2), product1
(category1, 3), product2
(category2, 3), product1
(category2, 7), product3
```
The `sales_rank` table could have the following structure:
@ -191,20 +191,20 @@ id int NOT NULL AUTO_INCREMENT
category_id int NOT NULL
total_sold int NOT NULL
product_id int NOT NULL
PRIMARY KEY(id)
FOREIGN KEY(category_id) REFERENCES Categories(id)
FOREIGN KEY(product_id) REFERENCES Products(id)
PRIMARY KEY(id)
FOREIGN KEY(category_id) REFERENCES Categories(id)
FOREIGN KEY(product_id) REFERENCES Products(id)
```
We'll create an [index](https://github.com/donnemartin/system-design-primer#use-good-indices) on `id`, `category_id`, and `product_id` to speed up lookups (log-time instead of scanning the entire table) and to keep the data in memory. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.<sup><a href=https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know>1</a></sup>
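To make the latency comparison concrete, the figures quoted above work out roughly as follows (a back-of-the-envelope sketch using the document's 250 µs / 4x / 80x numbers):

```python
# Approximate time to read 1 MB sequentially, per the latency numbers above.
memory_us = 250            # memory: ~250 microseconds per MB
ssd_us = memory_us * 4     # SSD: ~4x slower -> ~1 ms per MB
disk_us = memory_us * 80   # spinning disk: ~80x slower -> ~20 ms per MB

print(memory_us, ssd_us, disk_us)  # 250 1000 20000
```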
### Use case: User views the past week's most popular products by category
* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* The **Web Server** forwards the request to the **Read API** server
* The **Read API** server reads from the **SQL Database** `sales_rank` table
We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest) :
We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
```
$ curl https://amazon.com/api/v1/popular?category_id=1234
@ -233,13 +233,13 @@ Response:
},
```
For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc) .
For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc).
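Purely as an illustration of an internal RPC-style call, and not a prescription of any particular RPC framework, Python's standard-library XML-RPC can expose something like the reverse index's `process_search`. The endpoint, port, and canned results below are assumptions for this sketch:

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer


def process_search(query):
    """Stand-in for the reverse index cluster's search call."""
    return ['doc1', 'doc7', 'doc42']


# Internal service exposing the procedure.
server = SimpleXMLRPCServer(('localhost', 8000), logRequests=False, allow_none=True)
server.register_function(process_search, 'process_search')
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another internal service calling it remotely.
proxy = xmlrpc.client.ServerProxy('http://localhost:8000/')
print(proxy.process_search('hello world'))  # ['doc1', 'doc7', 'doc42']
```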
## Step 4: Scale the design
> Identify and address bottlenecks, given the constraints.
![Imgur](http://i.imgur.com/MzExP06.png)
![Imgur](http://i.imgur.com/MzExP06.png)
**Important: Do not simply jump right into the final design from the initial design!**
@ -251,33 +251,33 @@ We'll introduce some components to complete the design and to address scalabilit
*To avoid repeating discussions*, refer to the following [system design topics](https://github.com/donnemartin/system-design-primer#index-of-system-design-topics) for main talking points, tradeoffs, and alternatives:
* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
* [CDN](https://github.com/donnemartin/system-design-primer#content-delivery-network)
* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
* [Web server (reverse proxy) ](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* [API server (application layer) ](https://github.com/donnemartin/system-design-primer#application-layer)
* [Cache](https://github.com/donnemartin/system-design-primer#cache)
* [Relational database management system (RDBMS) ](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
* [SQL write master-slave failover](https://github.com/donnemartin/system-design-primer#fail-over)
* [Master-slave replication](https://github.com/donnemartin/system-design-primer#master-slave-replication)
* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
* [CDN](https://github.com/donnemartin/system-design-primer#content-delivery-network)
* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* [API server (application layer)](https://github.com/donnemartin/system-design-primer#application-layer)
* [Cache](https://github.com/donnemartin/system-design-primer#cache)
* [Relational database management system (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
* [SQL write master-slave failover](https://github.com/donnemartin/system-design-primer#fail-over)
* [Master-slave replication](https://github.com/donnemartin/system-design-primer#master-slave-replication)
* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
The **Analytics Database** could use a data warehousing solution such as Amazon Redshift or Google BigQuery.
We might only want to store a limited time period of data in the database, while storing the rest in a data warehouse or in an **Object Store**. An **Object Store** such as Amazon S3 can comfortably handle the constraint of 40 GB of new content per month.
To address the 40,000 *average* read requests per second (higher at peak) , traffic for popular content (and their sales rank) should be handled by the **Memory Cache** instead of the database. The **Memory Cache** is also useful for handling the unevenly distributed traffic and traffic spikes. With the large volume of reads, the **SQL Read Replicas** might not be able to handle the cache misses. We'll probably need to employ additional SQL scaling patterns.
To address the 40,000 *average* read requests per second (higher at peak), traffic for popular content (and their sales rank) should be handled by the **Memory Cache** instead of the database. The **Memory Cache** is also useful for handling the unevenly distributed traffic and traffic spikes. With the large volume of reads, the **SQL Read Replicas** might not be able to handle the cache misses. We'll probably need to employ additional SQL scaling patterns.
400 *average* writes per second (higher at peak) might be tough for a single **SQL Write Master-Slave**, also pointing to a need for additional scaling techniques.
SQL scaling patterns include:
* [Federation](https://github.com/donnemartin/system-design-primer#federation)
* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
* [Federation](https://github.com/donnemartin/system-design-primer#federation)
* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
We should also consider moving some data to a **NoSQL Database**.
@ -287,50 +287,50 @@ We should also consider moving some data to a **NoSQL Database**.
#### NoSQL
* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### Caching
* Where to cache
* [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
* [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
* [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
* [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
* [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
* [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
* [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
* [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
* [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* What to cache
* [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
* [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
* [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* When to update the cache
* [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
* [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
* [Write-behind (write-back) ](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
* [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
* [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
* [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
* [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
* [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
### Asynchronism and microservices
* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
### Communications
* Discuss tradeoffs:
* External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
* Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
* External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
* Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
### Security
Refer to the [security section](https://github.com/donnemartin/system-design-primer#security) .
Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
### Latency numbers
See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know) .
See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
### Ongoing
@ -3,75 +3,75 @@
from mrjob.job import MRJob
class SalesRanker(MRJob) :
class SalesRanker(MRJob):
def within_past_week(self, timestamp) :
def within_past_week(self, timestamp):
"""Return True if timestamp is within past week, False otherwise."""
...
def mapper(self, _, line) :
def mapper(self, _, line):
"""Parse each log line, extract and transform relevant lines.
Emit key value pairs of the form:
(foo, p1) , 2
(bar, p1) , 2
(bar, p1) , 1
(foo, p2) , 3
(bar, p3) , 10
(foo, p4) , 1
(foo, p1), 2
(bar, p1), 2
(bar, p1), 1
(foo, p2), 3
(bar, p3), 10
(foo, p4), 1
"""
timestamp, product_id, category, quantity = line.split('\t')
if self.within_past_week(timestamp) :
yield (category, product_id) , quantity
timestamp, product_id, category, quantity = line.split('\t')
if self.within_past_week(timestamp):
yield (category, product_id), quantity
def reducer(self, key, values) :
def reducer(self, key, values):
"""Sum values for each key.
(foo, p1) , 2
(bar, p1) , 3
(foo, p2) , 3
(bar, p3) , 10
(foo, p4) , 1
(foo, p1), 2
(bar, p1), 3
(foo, p2), 3
(bar, p3), 10
(foo, p4), 1
"""
yield key, sum(values)
yield key, sum(values)
def mapper_sort(self, key, value) :
def mapper_sort(self, key, value):
"""Construct key to ensure proper sorting.
Transform key and value to the form:
(foo, 2) , p1
(bar, 3) , p1
(foo, 3) , p2
(bar, 10) , p3
(foo, 1) , p4
(foo, 2), p1
(bar, 3), p1
(foo, 3), p2
(bar, 10), p3
(foo, 1), p4
The shuffle/sort step of MapReduce will then do a
distributed sort on the keys, resulting in:
(category1, 1) , product4
(category1, 2) , product1
(category1, 3) , product2
(category2, 3) , product1
(category2, 7) , product3
(category1, 1), product4
(category1, 2), product1
(category1, 3), product2
(category2, 3), product1
(category2, 7), product3
"""
category, product_id = key
quantity = value
yield (category, quantity) , product_id
yield (category, quantity), product_id
def reducer_identity(self, key, value) :
def reducer_identity(self, key, value):
yield key, value
def steps(self) :
def steps(self):
"""Run the map and reduce steps."""
return [
self.mr(mapper=self.mapper,
reducer=self.reducer) ,
reducer=self.reducer),
self.mr(mapper=self.mapper_sort,
reducer=self.reducer_identity) ,
reducer=self.reducer_identity),
]
if __name__ == '__main__':
SalesRanker.run()
SalesRanker.run()
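For reference, a minimal sketch of driving the `SalesRanker` job above with mrjob's runner API. The module name `sales_ranker.py`, the input file `sales.log`, and its tab-separated layout are assumptions for illustration; recent mrjob releases expose `make_runner`, `cat_output`, and `parse_output` as used here.
```python
# Illustrative sketch only -- assumes mrjob is installed, the job above lives in
# sales_ranker.py, and sales.log holds tab-separated lines of:
# timestamp, product_id, category, quantity
from sales_ranker import SalesRanker  # hypothetical module name for the job above


def run_sales_ranker(input_path):
    """Run the MapReduce job locally and print ((category, quantity), product_id) pairs."""
    job = SalesRanker(args=[input_path])
    with job.make_runner() as runner:
        runner.run()
        for key, value in job.parse_output(runner.cat_output()):
            print(key, value)


if __name__ == '__main__':
    run_sales_ranker('sales.log')
```
Passing `['-r', 'hadoop', input_path]` or `['-r', 'emr', input_path]` as `args` would target a Hadoop cluster or EMR instead of the local runner.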

View File

@ -64,7 +64,7 @@
> 用所有重要组件概述高水平设计
![Imgur](http://i.imgur.com/B8LDKD7.png)
![Imgur](http://i.imgur.com/B8LDKD7.png)
## 第 3 步:设计核心组件
@ -83,7 +83,7 @@
* **Web 服务器** 在 EC2 上
* 存储用户数据
* [**MySQL 数据库**](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
* [**MySQL 数据库**](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
运用 **纵向扩展**
@ -96,7 +96,7 @@
**折中方案, 可选方案, 和其他细节:**
* **纵向扩展** 的可选方案是 [**横向扩展**](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
* **纵向扩展** 的可选方案是 [**横向扩展**](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
#### 自 SQL 开始,但认真考虑 NoSQL
@ -104,7 +104,7 @@
**折中方案, 可选方案, 和其他细节:**
* 查阅 [关系型数据库管理系统 (RDBMS) ](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) 章节
* 查阅 [关系型数据库管理系统 (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) 章节
* 讨论使用 [SQL 或 NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql) 的原因
#### 分配公共静态 IP
@ -139,7 +139,7 @@
### 用户+
![Imgur](http://i.imgur.com/rrfjMXB.png)
![Imgur](http://i.imgur.com/rrfjMXB.png)
#### 假设
@ -191,7 +191,7 @@
### 用户++
![Imgur](http://i.imgur.com/raoFTXM.png)
![Imgur](http://i.imgur.com/raoFTXM.png)
#### 假设
@ -208,11 +208,11 @@
* 终止在 **负载平衡器** 上的SSL以减少后端服务器上的计算负载并简化证书管理
* 在多个可用区域中使用多台 **Web服务器**
* 在多个可用区域的 [**主-从 故障转移**](https://github.com/donnemartin/system-design-primer#master-slave-replication) 模式中使用多个 **MySQL** 实例来改进冗余
* 分离 **Web 服务器** 和 [**应用服务器**](https://github.com/donnemartin/system-design-primer#application-layer)
* 分离 **Web 服务器** 和 [**应用服务器**](https://github.com/donnemartin/system-design-primer#application-layer)
* 独立扩展和配置每一层
* **Web 服务器** 可以作为 [**反向代理**](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* **Web 服务器** 可以作为 [**反向代理**](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* 例如, 你可以添加 **应用服务器** 处理 **读 API** 而另外一些处理 **写 API**
* 将静态(和一些动态)内容转移到 [**内容分发网络 (CDN) **](https://github.com/donnemartin/system-design-primer#content-delivery-network) 例如 CloudFront 以减少负载和延迟
* 将静态(和一些动态)内容转移到 [**内容分发网络 (CDN)**](https://github.com/donnemartin/system-design-primer#content-delivery-network) 例如 CloudFront 以减少负载和延迟
**折中方案, 可选方案, 和其他细节:**
@ -220,7 +220,7 @@
### 用户+++
![Imgur](http://i.imgur.com/OZCxJr0.png)
![Imgur](http://i.imgur.com/OZCxJr0.png)
**注意:** **内部负载均衡** 不显示以减少混乱
@ -232,7 +232,7 @@
* 下面的目标试图解决 **MySQL数据库** 的伸缩性问题
    * 基于 **基准/负载测试** 和 **分析**,你可能只需要实现其中的一两个技术
* 将下列数据移动到一个 [**内存缓存**](https://github.com/donnemartin/system-design-primer#cache) ,例如弹性缓存,以减少负载和延迟:
* 将下列数据移动到一个 [**内存缓存**](https://github.com/donnemartin/system-design-primer#cache),例如弹性缓存,以减少负载和延迟:
* **MySQL** 中频繁访问的内容
* 首先, 尝试配置 **MySQL 数据库** 缓存以查看是否足以在实现 **内存缓存** 之前缓解瓶颈
* 来自 **Web 服务器** 的会话数据
@ -254,11 +254,11 @@
**折中方案, 可选方案, 和其他细节:**
* 查阅 [关系型数据库管理系统 (RDBMS) ](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) 章节
* 查阅 [关系型数据库管理系统 (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) 章节
### 用户++++
![Imgur](http://i.imgur.com/3X8nmdL.png)
![Imgur](http://i.imgur.com/3X8nmdL.png)
#### 假设
@ -297,7 +297,7 @@
### 用户+++++
![Imgur](http://i.imgur.com/jj3A5N8.png)
![Imgur](http://i.imgur.com/jj3A5N8.png)
**注释:** **自动伸缩** 组不显示以减少混乱
@ -317,10 +317,10 @@
SQL 扩展模型包括:
* [集合](https://github.com/donnemartin/system-design-primer#federation)
* [分片](https://github.com/donnemartin/system-design-primer#sharding)
* [反范式](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL 调优](https://github.com/donnemartin/system-design-primer#sql-tuning)
* [集合](https://github.com/donnemartin/system-design-primer#federation)
* [分片](https://github.com/donnemartin/system-design-primer#sharding)
* [反范式](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL 调优](https://github.com/donnemartin/system-design-primer#sql-tuning)
为了进一步处理高读和写请求,我们还应该考虑将适当的数据移动到一个 [**NoSQL数据库**](https://github.com/donnemartin/system-design-primer#nosql) ,例如 DynamoDB。
@ -344,58 +344,58 @@ SQL 扩展模型包括:
### SQL 扩展模式
* [读取副本](https://github.com/donnemartin/system-design-primer#master-slave-replication)
* [集合](https://github.com/donnemartin/system-design-primer#federation)
* [分区](https://github.com/donnemartin/system-design-primer#sharding)
* [反规范化](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL 调优](https://github.com/donnemartin/system-design-primer#sql-tuning)
* [读取副本](https://github.com/donnemartin/system-design-primer#master-slave-replication)
* [集合](https://github.com/donnemartin/system-design-primer#federation)
* [分区](https://github.com/donnemartin/system-design-primer#sharding)
* [反规范化](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL 调优](https://github.com/donnemartin/system-design-primer#sql-tuning)
#### NoSQL
* [键值存储](https://github.com/donnemartin/system-design-primer#key-value-store)
* [文档存储](https://github.com/donnemartin/system-design-primer#document-store)
* [宽表存储](https://github.com/donnemartin/system-design-primer#wide-column-store)
* [图数据库](https://github.com/donnemartin/system-design-primer#graph-database)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
* [键值存储](https://github.com/donnemartin/system-design-primer#key-value-store)
* [文档存储](https://github.com/donnemartin/system-design-primer#document-store)
* [宽表存储](https://github.com/donnemartin/system-design-primer#wide-column-store)
* [图数据库](https://github.com/donnemartin/system-design-primer#graph-database)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### 缓存
* 缓存到哪里
* [客户端缓存](https://github.com/donnemartin/system-design-primer#client-caching)
* [CDN 缓存](https://github.com/donnemartin/system-design-primer#cdn-caching)
* [Web 服务缓存](https://github.com/donnemartin/system-design-primer#web-server-caching)
* [数据库缓存](https://github.com/donnemartin/system-design-primer#database-caching)
* [应用缓存](https://github.com/donnemartin/system-design-primer#application-caching)
* [客户端缓存](https://github.com/donnemartin/system-design-primer#client-caching)
* [CDN 缓存](https://github.com/donnemartin/system-design-primer#cdn-caching)
* [Web 服务缓存](https://github.com/donnemartin/system-design-primer#web-server-caching)
* [数据库缓存](https://github.com/donnemartin/system-design-primer#database-caching)
* [应用缓存](https://github.com/donnemartin/system-design-primer#application-caching)
* 缓存什么
* [数据库请求层缓存](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
* [对象层缓存](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* [数据库请求层缓存](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
* [对象层缓存](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* 何时更新缓存
* [预留缓存](https://github.com/donnemartin/system-design-primer#cache-aside)
* [完全写入](https://github.com/donnemartin/system-design-primer#write-through)
* [延迟写 (写回) ](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
* [事先更新](https://github.com/donnemartin/system-design-primer#refresh-ahead)
* [预留缓存](https://github.com/donnemartin/system-design-primer#cache-aside)
* [完全写入](https://github.com/donnemartin/system-design-primer#write-through)
* [延迟写 (写回)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
* [事先更新](https://github.com/donnemartin/system-design-primer#refresh-ahead)
### 异步性和微服务
* [消息队列](https://github.com/donnemartin/system-design-primer#message-queues)
* [任务队列](https://github.com/donnemartin/system-design-primer#task-queues)
* [回退压力](https://github.com/donnemartin/system-design-primer#back-pressure)
* [微服务](https://github.com/donnemartin/system-design-primer#microservices)
* [消息队列](https://github.com/donnemartin/system-design-primer#message-queues)
* [任务队列](https://github.com/donnemartin/system-design-primer#task-queues)
* [回退压力](https://github.com/donnemartin/system-design-primer#back-pressure)
* [微服务](https://github.com/donnemartin/system-design-primer#microservices)
### 沟通
* 关于折中方案的讨论:
* 客户端的外部通讯 - [遵循 REST 的 HTTP APIs](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
* 内部通讯 - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
* [服务探索](https://github.com/donnemartin/system-design-primer#service-discovery)
* 客户端的外部通讯 - [遵循 REST 的 HTTP APIs](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
* 内部通讯 - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
* [服务探索](https://github.com/donnemartin/system-design-primer#service-discovery)
### 安全性
参考 [安全章节](https://github.com/donnemartin/system-design-primer#security)
参考 [安全章节](https://github.com/donnemartin/system-design-primer#security)
### 延迟数字指标
查阅 [每个程序员必懂的延迟数字](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know)
查阅 [每个程序员必懂的延迟数字](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know)
### 正在进行

View File

@ -64,7 +64,7 @@ Handy conversion guide:
> Outline a high level design with all important components.
![Imgur](http://i.imgur.com/B8LDKD7.png)
![Imgur](http://i.imgur.com/B8LDKD7.png)
## Step 3: Design core components
@ -83,7 +83,7 @@ Handy conversion guide:
* **Web server** on EC2
* Storage for user data
* [**MySQL Database**](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
* [**MySQL Database**](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
Use **Vertical Scaling**:
@ -96,7 +96,7 @@ Use **Vertical Scaling**:
*Trade-offs, alternatives, and additional details:*
* The alternative to **Vertical Scaling** is [**Horizontal scaling**](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
* The alternative to **Vertical Scaling** is [**Horizontal scaling**](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
#### Start with SQL, consider NoSQL
@ -104,8 +104,8 @@ The constraints assume there is a need for relational data. We can start off us
*Trade-offs, alternatives, and additional details:*
* See the [Relational database management system (RDBMS) ](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) section
* Discuss reasons to use [SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
* See the [Relational database management system (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) section
* Discuss reasons to use [SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
#### Assign a public static IP
@ -139,7 +139,7 @@ Add a **DNS** such as Route 53 to map the domain to the instance's public IP.
### Users+
![Imgur](http://i.imgur.com/rrfjMXB.png)
![Imgur](http://i.imgur.com/rrfjMXB.png)
#### Assumptions
@ -191,7 +191,7 @@ We've been able to address these issues with **Vertical Scaling** so far. Unfor
### Users++
![Imgur](http://i.imgur.com/raoFTXM.png)
![Imgur](http://i.imgur.com/raoFTXM.png)
#### Assumptions
@ -208,11 +208,11 @@ Our **Benchmarks/Load Tests** and **Profiling** show that our single **Web Serve
* Terminate SSL on the **Load Balancer** to reduce computational load on backend servers and to simplify certificate administration
* Use multiple **Web Servers** spread out over multiple availability zones
* Use multiple **MySQL** instances in [**Master-Slave Failover**](https://github.com/donnemartin/system-design-primer#master-slave-replication) mode across multiple availability zones to improve redundancy
* Separate out the **Web Servers** from the [**Application Servers**](https://github.com/donnemartin/system-design-primer#application-layer)
* Separate out the **Web Servers** from the [**Application Servers**](https://github.com/donnemartin/system-design-primer#application-layer)
* Scale and configure both layers independently
* **Web Servers** can run as a [**Reverse Proxy**](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* **Web Servers** can run as a [**Reverse Proxy**](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* For example, you can add **Application Servers** handling **Read APIs** while others handle **Write APIs**
* Move static (and some dynamic) content to a [**Content Delivery Network (CDN) **](https://github.com/donnemartin/system-design-primer#content-delivery-network) such as CloudFront to reduce load and latency
* Move static (and some dynamic) content to a [**Content Delivery Network (CDN)**](https://github.com/donnemartin/system-design-primer#content-delivery-network) such as CloudFront to reduce load and latency
*Trade-offs, alternatives, and additional details:*
@ -220,7 +220,7 @@ Our **Benchmarks/Load Tests** and **Profiling** show that our single **Web Serve
### Users+++
![Imgur](http://i.imgur.com/OZCxJr0.png)
![Imgur](http://i.imgur.com/OZCxJr0.png)
**Note:** **Internal Load Balancers** not shown to reduce clutter
@ -249,16 +249,16 @@ Our **Benchmarks/Load Tests** and **Profiling** show that we are read-heavy (100
* In addition to adding and scaling a **Memory Cache**, **MySQL Read Replicas** can also help relieve load on the **MySQL Write Master**
* Add logic to **Web Server** to separate out writes and reads
* Add **Load Balancers** in front of **MySQL Read Replicas** (not pictured to reduce clutter)
* Add **Load Balancers** in front of **MySQL Read Replicas** (not pictured to reduce clutter)
* Most services are read-heavy vs write-heavy
*Trade-offs, alternatives, and additional details:*
* See the [Relational database management system (RDBMS) ](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) section
* See the [Relational database management system (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) section
### Users++++
![Imgur](http://i.imgur.com/3X8nmdL.png)
![Imgur](http://i.imgur.com/3X8nmdL.png)
#### Assumptions
@ -297,7 +297,7 @@ Our **Benchmarks/Load Tests** and **Profiling** show that our traffic spikes dur
### Users+++++
![Imgur](http://i.imgur.com/jj3A5N8.png)
![Imgur](http://i.imgur.com/jj3A5N8.png)
**Note:** **Autoscaling** groups not shown to reduce clutter
@ -317,10 +317,10 @@ We'll continue to address scaling issues due to the problem's constraints:
SQL scaling patterns include:
* [Federation](https://github.com/donnemartin/system-design-primer#federation)
* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
* [Federation](https://github.com/donnemartin/system-design-primer#federation)
* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
To further address the high read and write requests, we should also consider moving appropriate data to a [**NoSQL Database**](https://github.com/donnemartin/system-design-primer#nosql) such as DynamoDB.
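As a hedged illustration of that last point, reading and writing a user record in DynamoDB with boto3 might look like the sketch below; the `users` table name, its `user_id` key, and the attributes are assumptions rather than part of the exercise.
```python
import boto3  # assumed dependency; credentials and region come from standard AWS config

dynamodb = boto3.resource('dynamodb')
users_table = dynamodb.Table('users')  # hypothetical table keyed on user_id


def put_user(user_id, name):
    """Write (or overwrite) a user item."""
    users_table.put_item(Item={'user_id': user_id, 'name': name})


def get_user(user_id):
    """Read a user item; returns None if the key is absent."""
    response = users_table.get_item(Key={'user_id': user_id})
    return response.get('Item')
```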
@ -344,58 +344,58 @@ We can further separate out our [**Application Servers**](https://github.com/don
### SQL scaling patterns
* [Read replicas](https://github.com/donnemartin/system-design-primer#master-slave-replication)
* [Federation](https://github.com/donnemartin/system-design-primer#federation)
* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
* [Read replicas](https://github.com/donnemartin/system-design-primer#master-slave-replication)
* [Federation](https://github.com/donnemartin/system-design-primer#federation)
* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
#### NoSQL
* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### Caching
* Where to cache
* [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
* [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
* [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
* [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
* [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
* [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
* [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
* [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
* [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* What to cache
* [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
* [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
* [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* When to update the cache
* [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
* [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
* [Write-behind (write-back) ](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
* [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
* [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
* [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
* [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
* [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
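To make the cache-aside entry above concrete, here is a minimal sketch with redis-py; the client setup, key naming, TTL, and the `db.query_user` accessor are assumptions for illustration.
```python
import json

import redis  # assumed client library for this sketch

cache = redis.StrictRedis(host='localhost', port=6379, db=0)
CACHE_TTL_SECONDS = 60  # arbitrary TTL chosen for illustration


def get_user(user_id, db):
    """Cache-aside read: try the cache first, fall back to the database and populate on a miss."""
    key = 'user:{0}'.format(user_id)
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    user = db.query_user(user_id)  # hypothetical database accessor
    if user is not None:
        cache.setex(key, CACHE_TTL_SECONDS, json.dumps(user))
    return user
```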
### Asynchronism and microservices
* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
### Communications
* Discuss tradeoffs:
* External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
* Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
* External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
* Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
### Security
Refer to the [security section](https://github.com/donnemartin/system-design-primer#security) .
Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
### Latency numbers
See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know) .
See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
### Ongoing

View File

@ -29,7 +29,7 @@
* 每个用户平均有 50 个朋友
* 每月 10 亿次朋友搜索
训练使用更传统的系统 - 别用图特有的解决方案例如 [GraphQL](http://graphql.org/) 或图数据库如 [Neo4j](https://neo4j.com/)
训练使用更传统的系统 - 别用图特有的解决方案例如 [GraphQL](http://graphql.org/) 或图数据库如 [Neo4j](https://neo4j.com/)。
#### 计算使用
@ -50,7 +50,7 @@
> 用所有重要组件概述高水平设计
![Imgur](http://i.imgur.com/wxXyq2J.png)
![Imgur](http://i.imgur.com/wxXyq2J.png)
## 第 3 步:设计核心组件
@ -63,37 +63,37 @@
没有百万用户(点)的和十亿朋友关系(边)的限制,我们能够用一般 BFS 方法解决无权重最短路径任务:
```python
class Graph(Graph) :
class Graph(Graph):
def shortest_path(self, source, dest) :
def shortest_path(self, source, dest):
if source is None or dest is None:
return None
if source is dest:
return [source.key]
prev_node_keys = self._shortest_path(source, dest)
prev_node_keys = self._shortest_path(source, dest)
if prev_node_keys is None:
return None
else:
path_ids = [dest.key]
prev_node_key = prev_node_keys[dest.key]
while prev_node_key is not None:
path_ids.append(prev_node_key)
path_ids.append(prev_node_key)
prev_node_key = prev_node_keys[prev_node_key]
return path_ids[::-1]
def _shortest_path(self, source, dest) :
queue = deque()
queue.append(source)
def _shortest_path(self, source, dest):
queue = deque()
queue.append(source)
prev_node_keys = {source.key: None}
source.visit_state = State.visited
while queue:
node = queue.popleft()
node = queue.popleft()
if node is dest:
return prev_node_keys
prev_node = node
for adj_node in node.adj_nodes.values() :
for adj_node in node.adj_nodes.values():
if adj_node.visit_state == State.unvisited:
queue.append(adj_node)
queue.append(adj_node)
prev_node_keys[adj_node.key] = prev_node.key
adj_node.visit_state = State.visited
return None
@ -101,7 +101,7 @@ class Graph(Graph) :
我们不能在同一台机器上满足所有用户,我们需要通过 **人员服务器** [拆分](https://github.com/donnemartin/system-design-primer#sharding) 用户并且通过 **查询服务** 访问。
* **客户端** 向 **服务器** 发送请求,**服务器** 作为 [反向代理](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* **客户端** 向 **服务器** 发送请求,**服务器** 作为 [反向代理](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* **搜索 API** 服务器向 **用户图服务** 转发请求
* **用户图服务** 有以下功能:
* 使用 **查询服务** 找到当前用户信息存储的 **人员服务器**
@ -117,43 +117,43 @@ class Graph(Graph) :
**查询服务** 实现:
```python
class LookupService(object) :
class LookupService(object):
def __init__(self) :
self.lookup = self._init_lookup() # key: person_id, value: person_server
def __init__(self):
self.lookup = self._init_lookup() # key: person_id, value: person_server
def _init_lookup(self) :
def _init_lookup(self):
...
def lookup_person_server(self, person_id) :
def lookup_person_server(self, person_id):
return self.lookup[person_id]
```
**人员服务器** 实现:
```python
class PersonServer(object) :
class PersonServer(object):
def __init__(self) :
def __init__(self):
self.people = {} # key: person_id, value: person
def add_person(self, person) :
def add_person(self, person):
...
def people(self, ids) :
def people(self, ids):
results = []
for id in ids:
if id in self.people:
results.append(self.people[id])
results.append(self.people[id])
return results
```
**用户** 实现:
```python
class Person(object) :
class Person(object):
def __init__(self, id, name, friend_ids) :
def __init__(self, id, name, friend_ids):
self.id = id
self.name = name
self.friend_ids = friend_ids
@ -162,21 +162,21 @@ class Person(object) :
**用户图服务** 实现:
```python
class UserGraphService(object) :
class UserGraphService(object):
def __init__(self, lookup_service) :
def __init__(self, lookup_service):
self.lookup_service = lookup_service
def person(self, person_id) :
person_server = self.lookup_service.lookup_person_server(person_id)
return person_server.people([person_id])
def person(self, person_id):
person_server = self.lookup_service.lookup_person_server(person_id)
return person_server.people([person_id])
def shortest_path(self, source_key, dest_key) :
def shortest_path(self, source_key, dest_key):
if source_key is None or dest_key is None:
return None
if source_key is dest_key:
return [source_key]
prev_node_keys = self._shortest_path(source_key, dest_key)
prev_node_keys = self._shortest_path(source_key, dest_key)
if prev_node_keys is None:
return None
else:
@ -184,40 +184,40 @@ class UserGraphService(object) :
path_ids = [dest_key]
prev_node_key = prev_node_keys[dest_key]
while prev_node_key is not None:
path_ids.append(prev_node_key)
path_ids.append(prev_node_key)
prev_node_key = prev_node_keys[prev_node_key]
# Reverse the list since we iterated backwards
return path_ids[::-1]
def _shortest_path(self, source_key, dest_key, path) :
def _shortest_path(self, source_key, dest_key, path):
# Use the id to get the Person
source = self.person(source_key)
source = self.person(source_key)
# Update our bfs queue
queue = deque()
queue.append(source)
queue = deque()
queue.append(source)
# prev_node_keys keeps track of each hop from
# the source_key to the dest_key
prev_node_keys = {source_key: None}
# We'll use visited_ids to keep track of which nodes we've
# visited, which can be different from a typical bfs where
# this can be stored in the node itself
visited_ids = set()
visited_ids.add(source.id)
visited_ids = set()
visited_ids.add(source.id)
while queue:
node = queue.popleft()
node = queue.popleft()
if node.key is dest_key:
return prev_node_keys
prev_node = node
for friend_id in node.friend_ids:
if friend_id not in visited_ids:
friend_node = self.person(friend_id)
queue.append(friend_node)
friend_node = self.person(friend_id)
queue.append(friend_node)
prev_node_keys[friend_id] = prev_node.key
visited_ids.add(friend_id)
visited_ids.add(friend_id)
return None
```
我们用的是公共的 [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
我们用的是公共的 [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
```
$ curl https://social.com/api/v1/friend_search?person_id=1234
@ -243,13 +243,13 @@ $ curl https://social.com/api/v1/friend_search?person_id=1234
},
```
内部通信使用 [远端过程调用](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
内部通信使用 [远端过程调用](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)。
## 第 4 步:扩展设计
> 在给定约束条件下,定义和确认瓶颈。
![Imgur](http://i.imgur.com/cdCv5g7.png)
![Imgur](http://i.imgur.com/cdCv5g7.png)
**重要:别简化从最初设计到最终设计的过程!**
@ -261,14 +261,14 @@ $ curl https://social.com/api/v1/friend_search?person_id=1234
**避免重复讨论**,以下网址链接到 [系统设计主题](https://github.com/donnemartin/system-design-primer#index-of-system-design-topics) 相关的主流方案、折中方案和替代方案。
* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
* [负载均衡](https://github.com/donnemartin/system-design-primer#load-balancer)
* [横向扩展](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
* [Web 服务器(反向代理)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* [API 服务器(应用层)](https://github.com/donnemartin/system-design-primer#application-layer)
* [缓存](https://github.com/donnemartin/system-design-primer#cache)
* [一致性模式](https://github.com/donnemartin/system-design-primer#consistency-patterns)
* [可用性模式](https://github.com/donnemartin/system-design-primer#availability-patterns)
* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
* [负载均衡](https://github.com/donnemartin/system-design-primer#load-balancer)
* [横向扩展](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
* [Web 服务器(反向代理)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* [API 服务器(应用层)](https://github.com/donnemartin/system-design-primer#application-layer)
* [缓存](https://github.com/donnemartin/system-design-primer#cache)
* [一致性模式](https://github.com/donnemartin/system-design-primer#consistency-patterns)
* [可用性模式](https://github.com/donnemartin/system-design-primer#availability-patterns)
解决 **平均** 每秒 400 次请求的限制(峰值),人员数据可以存在例如 Redis 或 Memcached 这样的 **内存** 中以减少响应次数和下游流量通信服务。这尤其在用户执行多次连续查询和查询哪些广泛连接的人时十分有用。从内存中读取 1MB 数据大约要 250 微秒,从 SSD 中读取同样大小的数据时间要长 4 倍,从硬盘要长 80 倍。<sup><a href=https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know>1</a></sup>
@ -279,9 +279,9 @@ $ curl https://social.com/api/v1/friend_search?person_id=1234
* 在同一台 **人员服务器** 上托管批处理同一批朋友查找减少机器跳转
* 通过地理位置 [拆分](https://github.com/donnemartin/system-design-primer#sharding) **人员服务器** 来进一步优化,因为朋友通常住得都比较近
* 同时进行两个 BFS 查找,一个从 source 开始,一个从 destination 开始,然后合并两个路径
* 从有庞大朋友圈的人开始找起,这样更有可能减小当前用户和搜索目标之间的 [离散度数](https://en.wikipedia.org/wiki/Six_degrees_of_separation)
* 从有庞大朋友圈的人开始找起,这样更有可能减小当前用户和搜索目标之间的 [离散度数](https://en.wikipedia.org/wiki/Six_degrees_of_separation)
* 在询问用户是否继续查询之前设置基于时间或跳跃数阈值,当在某些案例中搜索耗费时间过长时。
* 使用类似 [Neo4j](https://neo4j.com/) 的 **图数据库** 或图特定查询语法,例如 [GraphQL](http://graphql.org/) (如果没有禁止使用 **图数据库** 的限制的话)
* 使用类似 [Neo4j](https://neo4j.com/) 的 **图数据库** 或图特定查询语法,例如 [GraphQL](http://graphql.org/)(如果没有禁止使用 **图数据库** 的限制的话)
## 额外的话题
@ -289,58 +289,58 @@ $ curl https://social.com/api/v1/friend_search?person_id=1234
### SQL 扩展模式
* [读取副本](https://github.com/donnemartin/system-design-primer#master-slave-replication)
* [集合](https://github.com/donnemartin/system-design-primer#federation)
* [分区](https://github.com/donnemartin/system-design-primer#sharding)
* [反规范化](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL 调优](https://github.com/donnemartin/system-design-primer#sql-tuning)
* [读取副本](https://github.com/donnemartin/system-design-primer#master-slave-replication)
* [集合](https://github.com/donnemartin/system-design-primer#federation)
* [分区](https://github.com/donnemartin/system-design-primer#sharding)
* [反规范化](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL 调优](https://github.com/donnemartin/system-design-primer#sql-tuning)
#### NoSQL
* [键值存储](https://github.com/donnemartin/system-design-primer#key-value-store)
* [文档存储](https://github.com/donnemartin/system-design-primer#document-store)
* [宽表存储](https://github.com/donnemartin/system-design-primer#wide-column-store)
* [图数据库](https://github.com/donnemartin/system-design-primer#graph-database)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
* [键值存储](https://github.com/donnemartin/system-design-primer#key-value-store)
* [文档存储](https://github.com/donnemartin/system-design-primer#document-store)
* [宽表存储](https://github.com/donnemartin/system-design-primer#wide-column-store)
* [图数据库](https://github.com/donnemartin/system-design-primer#graph-database)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### 缓存
* 缓存到哪里
* [客户端缓存](https://github.com/donnemartin/system-design-primer#client-caching)
* [CDN 缓存](https://github.com/donnemartin/system-design-primer#cdn-caching)
* [Web 服务缓存](https://github.com/donnemartin/system-design-primer#web-server-caching)
* [数据库缓存](https://github.com/donnemartin/system-design-primer#database-caching)
* [应用缓存](https://github.com/donnemartin/system-design-primer#application-caching)
* [客户端缓存](https://github.com/donnemartin/system-design-primer#client-caching)
* [CDN 缓存](https://github.com/donnemartin/system-design-primer#cdn-caching)
* [Web 服务缓存](https://github.com/donnemartin/system-design-primer#web-server-caching)
* [数据库缓存](https://github.com/donnemartin/system-design-primer#database-caching)
* [应用缓存](https://github.com/donnemartin/system-design-primer#application-caching)
* 缓存什么
* [数据库请求层缓存](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
* [对象层缓存](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* [数据库请求层缓存](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
* [对象层缓存](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* 何时更新缓存
* [预留缓存](https://github.com/donnemartin/system-design-primer#cache-aside)
* [完全写入](https://github.com/donnemartin/system-design-primer#write-through)
* [延迟写 (写回) ](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
* [事先更新](https://github.com/donnemartin/system-design-primer#refresh-ahead)
* [预留缓存](https://github.com/donnemartin/system-design-primer#cache-aside)
* [完全写入](https://github.com/donnemartin/system-design-primer#write-through)
* [延迟写 (写回)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
* [事先更新](https://github.com/donnemartin/system-design-primer#refresh-ahead)
### 异步性和微服务
* [消息队列](https://github.com/donnemartin/system-design-primer#message-queues)
* [任务队列](https://github.com/donnemartin/system-design-primer#task-queues)
* [回退压力](https://github.com/donnemartin/system-design-primer#back-pressure)
* [微服务](https://github.com/donnemartin/system-design-primer#microservices)
* [消息队列](https://github.com/donnemartin/system-design-primer#message-queues)
* [任务队列](https://github.com/donnemartin/system-design-primer#task-queues)
* [回退压力](https://github.com/donnemartin/system-design-primer#back-pressure)
* [微服务](https://github.com/donnemartin/system-design-primer#microservices)
### 沟通
* 关于折中方案的讨论:
* 客户端的外部通讯 - [遵循 REST 的 HTTP APIs](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
* 内部通讯 - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
* [服务探索](https://github.com/donnemartin/system-design-primer#service-discovery)
* 客户端的外部通讯 - [遵循 REST 的 HTTP APIs](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
* 内部通讯 - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
* [服务探索](https://github.com/donnemartin/system-design-primer#service-discovery)
### 安全性
参考 [安全章节](https://github.com/donnemartin/system-design-primer#security)
参考 [安全章节](https://github.com/donnemartin/system-design-primer#security)
### 延迟数字指标
查阅 [每个程序员必懂的延迟数字](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know)
查阅 [每个程序员必懂的延迟数字](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know)
### 正在进行

View File

@ -29,7 +29,7 @@ Without an interviewer to address clarifying questions, we'll define some use ca
* 50 friends per user average
* 1 billion friend searches per month
Exercise the use of more traditional systems - don't use graph-specific solutions such as [GraphQL](http://graphql.org/) or a graph database like [Neo4j](https://neo4j.com/)
Exercise the use of more traditional systems - don't use graph-specific solutions such as [GraphQL](http://graphql.org/) or a graph database like [Neo4j](https://neo4j.com/)
#### Calculate usage
@ -50,7 +50,7 @@ Handy conversion guide:
> Outline a high level design with all important components.
![Imgur](http://i.imgur.com/wxXyq2J.png)
![Imgur](http://i.imgur.com/wxXyq2J.png)
## Step 3: Design core components
@ -60,40 +60,40 @@ Handy conversion guide:
**Clarify with your interviewer how much code you are expected to write**.
Without the constraint of millions of users (vertices) and billions of friend relationships (edges) , we could solve this unweighted shortest path task with a general BFS approach:
Without the constraint of millions of users (vertices) and billions of friend relationships (edges), we could solve this unweighted shortest path task with a general BFS approach:
```python
class Graph(Graph) :
class Graph(Graph):
def shortest_path(self, source, dest) :
def shortest_path(self, source, dest):
if source is None or dest is None:
return None
if source is dest:
return [source.key]
prev_node_keys = self._shortest_path(source, dest)
prev_node_keys = self._shortest_path(source, dest)
if prev_node_keys is None:
return None
else:
path_ids = [dest.key]
prev_node_key = prev_node_keys[dest.key]
while prev_node_key is not None:
path_ids.append(prev_node_key)
path_ids.append(prev_node_key)
prev_node_key = prev_node_keys[prev_node_key]
return path_ids[::-1]
def _shortest_path(self, source, dest) :
queue = deque()
queue.append(source)
def _shortest_path(self, source, dest):
queue = deque()
queue.append(source)
prev_node_keys = {source.key: None}
source.visit_state = State.visited
while queue:
node = queue.popleft()
node = queue.popleft()
if node is dest:
return prev_node_keys
prev_node = node
for adj_node in node.adj_nodes.values() :
for adj_node in node.adj_nodes.values():
if adj_node.visit_state == State.unvisited:
queue.append(adj_node)
queue.append(adj_node)
prev_node_keys[adj_node.key] = prev_node.key
adj_node.visit_state = State.visited
return None
@ -101,7 +101,7 @@ class Graph(Graph) :
We won't be able to fit all users on the same machine, we'll need to [shard](https://github.com/donnemartin/system-design-primer#sharding) users across **Person Servers** and access them with a **Lookup Service**.
* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* The **Web Server** forwards the request to the **Search API** server
* The **Search API** server forwards the request to the **User Graph Service**
* The **User Graph Service** does the following:
@ -109,7 +109,7 @@ We won't be able to fit all users on the same machine, we'll need to [shard](htt
* Finds the appropriate **Person Server** to retrieve the current user's list of `friend_ids`
* Runs a BFS search using the current user as the `source` and the current user's `friend_ids` as the ids for each `adjacent_node`
* To get the `adjacent_node` from a given id:
* The **User Graph Service** will *again* need to communicate with the **Lookup Service** to determine which **Person Server** stores the `adjacent_node` matching the given id (potential for optimization)
* The **User Graph Service** will *again* need to communicate with the **Lookup Service** to determine which **Person Server** stores the `adjacent_node` matching the given id (potential for optimization)
**Clarify with your interviewer how much code you should be writing**.
@ -118,43 +118,43 @@ We won't be able to fit all users on the same machine, we'll need to [shard](htt
**Lookup Service** implementation:
```python
class LookupService(object) :
class LookupService(object):
def __init__(self) :
self.lookup = self._init_lookup() # key: person_id, value: person_server
def __init__(self):
self.lookup = self._init_lookup() # key: person_id, value: person_server
def _init_lookup(self) :
def _init_lookup(self):
...
def lookup_person_server(self, person_id) :
def lookup_person_server(self, person_id):
return self.lookup[person_id]
```
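The `_init_lookup` body is intentionally left elided above. Purely as an illustration (not the repo's implementation), a simple modulo-based assignment could look like the following, with consistent hashing being the usual refinement once Person Servers are added or removed:
```python
def build_lookup(person_ids, person_servers):
    """Hypothetical helper: map each person_id to a Person Server via modulo hashing.

    Illustrative only -- the exercise leaves _init_lookup unspecified.
    """
    return {person_id: person_servers[hash(person_id) % len(person_servers)]
            for person_id in person_ids}
```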
**Person Server** implementation:
```python
class PersonServer(object) :
class PersonServer(object):
def __init__(self) :
def __init__(self):
self.people = {} # key: person_id, value: person
def add_person(self, person) :
def add_person(self, person):
...
def people(self, ids) :
def people(self, ids):
results = []
for id in ids:
if id in self.people:
results.append(self.people[id])
results.append(self.people[id])
return results
```
**Person** implementation:
```python
class Person(object) :
class Person(object):
def __init__(self, id, name, friend_ids) :
def __init__(self, id, name, friend_ids):
self.id = id
self.name = name
self.friend_ids = friend_ids
@ -163,21 +163,21 @@ class Person(object) :
**User Graph Service** implementation:
```python
class UserGraphService(object) :
class UserGraphService(object):
def __init__(self, lookup_service) :
def __init__(self, lookup_service):
self.lookup_service = lookup_service
def person(self, person_id) :
person_server = self.lookup_service.lookup_person_server(person_id)
return person_server.people([person_id])
def person(self, person_id):
person_server = self.lookup_service.lookup_person_server(person_id)
return person_server.people([person_id])
def shortest_path(self, source_key, dest_key) :
def shortest_path(self, source_key, dest_key):
if source_key is None or dest_key is None:
return None
if source_key is dest_key:
return [source_key]
prev_node_keys = self._shortest_path(source_key, dest_key)
prev_node_keys = self._shortest_path(source_key, dest_key)
if prev_node_keys is None:
return None
else:
@ -185,40 +185,40 @@ class UserGraphService(object) :
path_ids = [dest_key]
prev_node_key = prev_node_keys[dest_key]
while prev_node_key is not None:
path_ids.append(prev_node_key)
path_ids.append(prev_node_key)
prev_node_key = prev_node_keys[prev_node_key]
# Reverse the list since we iterated backwards
return path_ids[::-1]
def _shortest_path(self, source_key, dest_key, path) :
def _shortest_path(self, source_key, dest_key, path):
# Use the id to get the Person
source = self.person(source_key)
source = self.person(source_key)
# Update our bfs queue
queue = deque()
queue.append(source)
queue = deque()
queue.append(source)
# prev_node_keys keeps track of each hop from
# the source_key to the dest_key
prev_node_keys = {source_key: None}
# We'll use visited_ids to keep track of which nodes we've
# visited, which can be different from a typical bfs where
# this can be stored in the node itself
visited_ids = set()
visited_ids.add(source.id)
visited_ids = set()
visited_ids.add(source.id)
while queue:
node = queue.popleft()
node = queue.popleft()
if node.key is dest_key:
return prev_node_keys
prev_node = node
for friend_id in node.friend_ids:
if friend_id not in visited_ids:
friend_node = self.person(friend_id)
queue.append(friend_node)
friend_node = self.person(friend_id)
queue.append(friend_node)
prev_node_keys[friend_id] = prev_node.key
visited_ids.add(friend_id)
visited_ids.add(friend_id)
return None
```
We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest) :
We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
```
$ curl https://social.com/api/v1/friend_search?person_id=1234
@ -244,13 +244,13 @@ Response:
},
```
For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc) .
For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc).
## Step 4: Scale the design
> Identify and address bottlenecks, given the constraints.
![Imgur](http://i.imgur.com/cdCv5g7.png)
![Imgur](http://i.imgur.com/cdCv5g7.png)
**Important: Do not simply jump right into the final design from the initial design!**
@ -262,16 +262,16 @@ We'll introduce some components to complete the design and to address scalabilit
*To avoid repeating discussions*, refer to the following [system design topics](https://github.com/donnemartin/system-design-primer#index-of-system-design-topics) for main talking points, tradeoffs, and alternatives:
* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
* [Web server (reverse proxy) ](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* [API server (application layer) ](https://github.com/donnemartin/system-design-primer#application-layer)
* [Cache](https://github.com/donnemartin/system-design-primer#cache)
* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* [API server (application layer)](https://github.com/donnemartin/system-design-primer#application-layer)
* [Cache](https://github.com/donnemartin/system-design-primer#cache)
* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
To address the constraint of 400 *average* read requests per second (higher at peak) , person data can be served from a **Memory Cache** such as Redis or Memcached to reduce response times and to reduce traffic to downstream services. This could be especially useful for people who do multiple searches in succession and for people who are well-connected. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.<sup><a href=https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know>1</a></sup>
To address the constraint of 400 *average* read requests per second (higher at peak), person data can be served from a **Memory Cache** such as Redis or Memcached to reduce response times and to reduce traffic to downstream services. This could be especially useful for people who do multiple searches in succession and for people who are well-connected. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.<sup><a href=https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know>1</a></sup>
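A rough sketch of fronting person lookups with Memcached via pymemcache follows; the client setup, key scheme, TTL, and the assumption that person data serializes to JSON are all illustrative choices, and the same shape applies to Redis.
```python
import json

from pymemcache.client.base import Client  # assumed client library

memcache = Client(('localhost', 11211))
PERSON_TTL_SECONDS = 300  # arbitrary TTL for illustration


def get_person_cached(person_id, user_graph_service):
    """Serve person data from the Memory Cache, falling back to the Person Servers on a miss."""
    key = 'person:{0}'.format(person_id)
    cached = memcache.get(key)
    if cached is not None:
        return json.loads(cached)
    person = user_graph_service.person(person_id)  # assumed to return JSON-serializable data
    if person is not None:
        memcache.set(key, json.dumps(person), expire=PERSON_TTL_SECONDS)
    return person
```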
Below are further optimizations:
@ -282,7 +282,7 @@ Below are further optimizations:
* Do two BFS searches at the same time, one starting from the source, and one from the destination, then merge the two paths
* Start the BFS search from people with large numbers of friends, as they are more likely to reduce the number of [degrees of separation](https://en.wikipedia.org/wiki/Six_degrees_of_separation) between the current user and the search target
* Set a limit based on time or number of hops before asking the user if they want to continue searching, as searching could take a considerable amount of time in some cases
* Use a **Graph Database** such as [Neo4j](https://neo4j.com/) or a graph-specific query language such as [GraphQL](http://graphql.org/) (if there were no constraint preventing the use of **Graph Databases**)
* Use a **Graph Database** such as [Neo4j](https://neo4j.com/) or a graph-specific query language such as [GraphQL](http://graphql.org/) (if there were no constraint preventing the use of **Graph Databases**)
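One of the optimizations above is to run two BFS searches at the same time, one from the source and one from the destination, and merge them. A minimal sketch of that idea follows; it returns only the number of hops (path reconstruction is omitted for brevity), and `get_friend_ids` is an assumed accessor returning a person's friend ids.
```python
def degrees_of_separation(source_id, dest_id, get_friend_ids):
    """Bidirectional BFS sketch: expand frontiers from both ends until they meet.

    Returns the number of hops between source_id and dest_id, or None if they
    are not connected.
    """
    if source_id == dest_id:
        return 0
    frontier_a, frontier_b = {source_id}, {dest_id}
    visited_a, visited_b = {source_id}, {dest_id}
    hops = 0
    while frontier_a and frontier_b:
        # Expand the smaller frontier to limit fan-out
        if len(frontier_a) > len(frontier_b):
            frontier_a, frontier_b = frontier_b, frontier_a
            visited_a, visited_b = visited_b, visited_a
        hops += 1
        next_frontier = set()
        for person_id in frontier_a:
            for friend_id in get_friend_ids(person_id):
                if friend_id in visited_b:
                    return hops  # the two searches have met
                if friend_id not in visited_a:
                    visited_a.add(friend_id)
                    next_frontier.add(friend_id)
        frontier_a = next_frontier
    return None
```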
## Additional talking points
@ -290,58 +290,58 @@ Below are further optimizations:
### SQL scaling patterns
* [Read replicas](https://github.com/donnemartin/system-design-primer#master-slave-replication)
* [Federation](https://github.com/donnemartin/system-design-primer#federation)
* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
* [Read replicas](https://github.com/donnemartin/system-design-primer#master-slave-replication)
* [Federation](https://github.com/donnemartin/system-design-primer#federation)
* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
#### NoSQL
* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### Caching
* Where to cache
* [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
* [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
* [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
* [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
* [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
* [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
* [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
* [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
* [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* What to cache
* [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
* [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
* [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* When to update the cache
* [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
* [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
* [Write-behind (write-back) ](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
* [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
* [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
* [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
* [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
* [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
### Asynchronism and microservices
* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
### Communications
* Discuss tradeoffs:
* External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
* Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
* External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
* Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
### Security
Refer to the [security section](https://github.com/donnemartin/system-design-primer#security) .
Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
### Latency numbers
See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know) .
See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
### Ongoing

View File

@ -3,70 +3,70 @@ from collections import deque
from enum import Enum
class State(Enum) :
class State(Enum):
unvisited = 0
visited = 1
class Graph(object) :
class Graph(object):
def bfs(self, source, dest) :
def bfs(self, source, dest):
if source is None:
return False
queue = deque()
queue.append(source)
queue = deque()
queue.append(source)
source.visit_state = State.visited
while queue:
node = queue.popleft()
print(node)
node = queue.popleft()
print(node)
if dest is node:
return True
for adjacent_node in node.adj_nodes.values() :
for adjacent_node in node.adj_nodes.values():
if adjacent_node.visit_state == State.unvisited:
queue.append(adjacent_node)
queue.append(adjacent_node)
adjacent_node.visit_state = State.visited
return False
class Person(object) :
class Person(object):
def __init__(self, id, name) :
def __init__(self, id, name):
self.id = id
self.name = name
self.friend_ids = []
class LookupService(object) :
class LookupService(object):
def __init__(self) :
def __init__(self):
self.lookup = {} # key: person_id, value: person_server
def get_person(self, person_id) :
def get_person(self, person_id):
person_server = self.lookup[person_id]
return person_server.people[person_id]
class PersonServer(object) :
class PersonServer(object):
def __init__(self) :
def __init__(self):
self.people = {} # key: person_id, value: person
def get_people(self, ids) :
def get_people(self, ids):
results = []
for id in ids:
if id in self.people:
results.append(self.people[id])
results.append(self.people[id])
return results
class UserGraphService(object) :
class UserGraphService(object):
def __init__(self, person_ids, lookup) :
def __init__(self, person_ids, lookup):
self.lookup = lookup
self.person_ids = person_ids
self.visited_ids = set()
self.visited_ids = set()
def bfs(self, source, dest) :
def bfs(self, source, dest):
# Use self.visited_ids to track visited nodes
# Use self.lookup to translate a person_id to a Person
pass
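The `bfs` method above is left as a stub. One possible completion is sketched here as a standalone function so the sample itself stays untouched; it assumes `source` and `dest` are `Person` objects with populated `friend_ids` and uses `visited_ids` and `lookup` exactly as the hints suggest (a sketch, not the repo's reference solution).
```python
from collections import deque


def bfs(user_graph_service, source, dest):
    """Breadth-first search across Person Servers; returns True if dest is reachable from source."""
    if source is None or dest is None:
        return False
    queue = deque()
    queue.append(source)
    user_graph_service.visited_ids.add(source.id)
    while queue:
        node = queue.popleft()
        if node.id == dest.id:
            return True
        for friend_id in node.friend_ids:
            if friend_id not in user_graph_service.visited_ids:
                user_graph_service.visited_ids.add(friend_id)
                # May hop to a different Person Server via the lookup service
                friend = user_graph_service.lookup.get_person(friend_id)
                queue.append(friend)
    return False
```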

View File

@ -1,6 +1,6 @@
# 设计推特时间轴与搜索功能
**注意:这个文档中的链接会直接指向[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引) 中的有关部分,以避免重复的内容。你可以参考链接的相关内容,来了解其总的要点、方案的权衡取舍以及可选的替代方案。**
**注意:这个文档中的链接会直接指向[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)中的有关部分,以避免重复的内容。你可以参考链接的相关内容,来了解其总的要点、方案的权衡取舍以及可选的替代方案。**
**设计 Facebook 的 feed** 与**设计 Facebook 搜索**与此为同一类型问题。
@ -74,11 +74,11 @@
* 每条推特 10 KB * 每天 5 亿条推特 * 每月 30 天
* 3 年产生新推特的内容为 5.4 PB
* 每秒需要处理 10 万次读取请求
* 每个月需要处理 2500 亿次请求 * (每秒 400 次请求 / 每月 10 亿次请求)
* 每个月需要处理 2500 亿次请求 * (每秒 400 次请求 / 每月 10 亿次请求)
* 每秒发布 6000 条推特
* 每月发布 150 亿条推特 * (每秒 400 次请求 / 每月 10 次请求)
* 每月发布 150 亿条推特 * (每秒 400 次请求 / 每月 10 次请求)
* 每秒推送 6 万条推特
* 每月推送 1500 亿条推特 * (每秒 400 次请求 / 每月 10 亿次请求)
* 每月推送 1500 亿条推特 * (每秒 400 次请求 / 每月 10 亿次请求)
* 每秒 4000 次搜索请求
便利换算指南:
@ -92,7 +92,7 @@
> 列出所有重要组件以规划概要设计。
![Imgur](http://i.imgur.com/48tEA2j.png)
![Imgur](http://i.imgur.com/48tEA2j.png)
## 第三步:设计核心组件
@ -100,13 +100,13 @@
### 用例:用户发表了一篇推特
我们可以将用户自己发表的推特存储在[关系数据库](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) 中。我们也可以讨论一下[究竟是用 SQL 还是用 NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
我们可以将用户自己发表的推特存储在[关系数据库](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)中。我们也可以讨论一下[究竟是用 SQL 还是用 NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)。
构建用户主页时间轴(查看关注用户的活动)以及推送推特是件麻烦事。将特推传播给所有关注者(每秒约递送 6 万条推特)这一操作有可能会使传统的[关系数据库](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) 超负载。因此,我们可以使用 **NoSQL 数据库**或**内存数据库**之类的更快的数据存储方式。从内存读取 1 MB 连续数据大约要花 250 微秒,而从 SSD 读取同样大小的数据要花费 4 倍的时间,从机械硬盘读取需要花费 80 倍以上的时间。<sup><a href=https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数>1</a></sup>
构建用户主页时间轴(查看关注用户的活动)以及推送推特是件麻烦事。将特推传播给所有关注者(每秒约递送 6 万条推特)这一操作有可能会使传统的[关系数据库](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)超负载。因此,我们可以使用 **NoSQL 数据库**或**内存数据库**之类的更快的数据存储方式。从内存读取 1 MB 连续数据大约要花 250 微秒,而从 SSD 读取同样大小的数据要花费 4 倍的时间,从机械硬盘读取需要花费 80 倍以上的时间。<sup><a href=https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数>1</a></sup>
我们可以将照片、视频之类的媒体存储于**对象存储**中。
* **客户端**向应用[反向代理](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server) 的**Web 服务器**发送一条推特
* **客户端**向应用[反向代理](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)的**Web 服务器**发送一条推特
* **Web 服务器**将请求转发给**写 API**服务器
* **写 API**服务器将推特使用 **SQL 数据库**存储于用户时间轴中
* **写 API**调用**消息输出服务**,进行以下操作:
@ -130,7 +130,7 @@
新发布的推特将被存储在对应用户(关注且活跃的用户)的主页时间轴的**内存缓存**中。
我们可以调用一个公共的 [REST API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
我们可以调用一个公共的 [REST API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
```
$ curl -X POST --data '{ "user_id": "123", "auth_token": "ABC123", \
@ -150,16 +150,16 @@ $ curl -X POST --data '{ "user_id": "123", "auth_token": "ABC123", \
}
```
而对于服务器内部的通信,我们可以使用 [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
而对于服务器内部的通信,我们可以使用 [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)。
### 用例:用户浏览主页时间轴
* **客户端**向 **Web 服务器**发起一次读取主页时间轴的请求
* **Web 服务器**将请求转发给**读取 API**服务器
* **读取 API**服务器调用**时间轴服务**进行以下操作:
* 从**内存缓存**读取时间轴数据,其中包括推特 id 与用户 id - O(1)
* 通过 [multiget](http://redis.io/commands/mget) 向**推特信息服务**进行查询,以获取相关 id 推特的额外信息 - O(n)
* 通过 muiltiget 向**用户信息服务**进行查询,以获取相关 id 用户的额外信息 - O(n)
* 从**内存缓存**读取时间轴数据,其中包括推特 id 与用户 id - O(1)
* 通过 [multiget](http://redis.io/commands/mget) 向**推特信息服务**进行查询,以获取相关 id 推特的额外信息 - O(n)
* 通过 muiltiget 向**用户信息服务**进行查询,以获取相关 id 用户的额外信息 - O(n)
REST API
@ -206,8 +206,8 @@ REST API 与前面的主页时间轴类似,区别只在于取出的推特是
* 修正拼写错误
* 规范字母大小写
* 将查询转换为布尔操作
* 查询**搜索集群**(例如[Lucene](https://lucene.apache.org/) )检索结果:
* 对集群内的所有服务器进行查询,将有结果的查询进行[发散聚合Scatter gathers](https://github.com/donnemartin/system-design-primer#under-development)
* 查询**搜索集群**(例如[Lucene](https://lucene.apache.org/))检索结果:
* 对集群内的所有服务器进行查询,将有结果的查询进行[发散聚合Scatter gathers](https://github.com/donnemartin/system-design-primer#under-development)
* 合并取到的条目,进行评分与排序,最终返回结果
REST API
@ -222,7 +222,7 @@ $ curl https://twitter.com/api/v1/search?query=hello+world
> 根据限制条件,找到并解决瓶颈。
![Imgur](http://i.imgur.com/MzExP06.png)
![Imgur](http://i.imgur.com/MzExP06.png)
**重要提示:不要从最初设计直接跳到最终设计中!**
@ -232,19 +232,19 @@ $ curl https://twitter.com/api/v1/search?query=hello+world
我们将会介绍一些组件来完成设计,并解决架构扩张问题。内置的负载均衡器将不做讨论以节省篇幅。
**为了避免重复讨论**,请参考[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引) 相关部分来了解其要点、方案的权衡取舍以及可选的替代方案。
**为了避免重复讨论**,请参考[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)相关部分来了解其要点、方案的权衡取舍以及可选的替代方案。
* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
* [负载均衡器](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
* [水平拓展](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
* [反向代理web 服务器)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
* [API 服务(应用层)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
* [缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
* [关系型数据库管理系统 (RDBMS) ](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#关系型数据库管理系统rdbms)
* [SQL 故障主从切换](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#故障切换)
* [主从复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
* [一致性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
* [可用性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
* [负载均衡器](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
* [水平拓展](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
* [反向代理web 服务器)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
* [API 服务(应用层)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
* [缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
* [关系型数据库管理系统 (RDBMS)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#关系型数据库管理系统rdbms)
* [SQL 故障主从切换](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#故障切换)
* [主从复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
* [一致性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
* [可用性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
**消息输出服务**有可能成为性能瓶颈。那些有着百万数量关注着的用户可能发一条推特就需要好几分钟才能完成消息输出进程。这有可能使 @回复 这种推特时出现竞争条件,因此需要根据服务时间对此推特进行重排序来降低影响。
@ -267,10 +267,10 @@ $ curl https://twitter.com/api/v1/search?query=hello+world
高容量的写入将淹没单个的 **SQL 写主从**模式,因此需要更多的拓展技术。
* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-调优)
* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-调优)
我们也可以考虑将一些数据移至 **NoSQL 数据库**
@ -280,50 +280,50 @@ $ curl https://twitter.com/api/v1/search?query=hello+world
#### NoSQL
* [键-值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
* [文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
* [键-值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
* [文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
### 缓存
* 在哪缓存
* [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
* [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
* [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
* [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
* [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
* [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
* [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
* [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
* [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
* [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
* 什么需要缓存
* [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
* [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
* [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
* [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
* 何时更新缓存
* [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
* [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
* [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
* [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
* [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
* [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
* [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
* [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
### 异步与微服务
* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
### 通信
* 可权衡选择的方案:
* 与客户端的外部通信 - [使用 REST 作为 HTTP API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
* 服务器内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
* 与客户端的外部通信 - [使用 REST 作为 HTTP API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
* 服务器内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
### 安全性
请参阅[「安全」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全) 一章。
请参阅[「安全」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全)一章。
### 延迟数值
请参阅[「每个程序员都应该知道的延迟数」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数)
请参阅[「每个程序员都应该知道的延迟数」](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数)。
### 持续探讨
View File
@ -18,8 +18,8 @@ Without an interviewer to address clarifying questions, we'll define some use ca
* **User** posts a tweet
* **Service** pushes tweets to followers, sending push notifications and emails
* **User** views the user timeline (activity from the user)
* **User** views the home timeline (activity from people the user is following)
* **User** views the user timeline (activity from the user)
* **User** views the home timeline (activity from people the user is following)
* **User** searches keywords
* **Service** has high availability
@ -74,13 +74,13 @@ Search
* 10 KB per tweet * 500 million tweets per day * 30 days per month
* 5.4 PB of new tweet content in 3 years
* 100 thousand read requests per second
* 250 billion read requests per month * (400 requests per second / 1 billion requests per month)
* 250 billion read requests per month * (400 requests per second / 1 billion requests per month)
* 6,000 tweets per second
* 15 billion tweets per month * (400 requests per second / 1 billion requests per month)
* 15 billion tweets per month * (400 requests per second / 1 billion requests per month)
* 60 thousand tweets delivered on fanout per second
* 150 billion tweets delivered on fanout per month * (400 requests per second / 1 billion requests per month)
* 150 billion tweets delivered on fanout per month * (400 requests per second / 1 billion requests per month)
* 4,000 search requests per second
* 10 billion searches per month * (400 requests per second / 1 billion requests per month)
* 10 billion searches per month * (400 requests per second / 1 billion requests per month)
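As a quick sanity check on the rounded figures above, the monthly-to-per-second conversions work out as follows (a sketch using the estimates as given):

```python
SECONDS_PER_MONTH = 60 * 60 * 24 * 30  # ~2.6 million seconds

def per_second(requests_per_month):
    """Convert a monthly volume to an approximate per-second rate."""
    return requests_per_month / SECONDS_PER_MONTH

print(per_second(250e9))  # ~96,000 reads/s, rounded to 100 thousand
print(per_second(15e9))   # ~5,800 tweets/s, rounded to 6,000
print(per_second(150e9))  # ~58,000 fanout deliveries/s, rounded to 60 thousand
print(per_second(10e9))   # ~3,900 searches/s, rounded to 4,000
```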
Handy conversion guide:
@ -93,7 +93,7 @@ Handy conversion guide:
> Outline a high level design with all important components.
![Imgur](http://i.imgur.com/48tEA2j.png)
![Imgur](http://i.imgur.com/48tEA2j.png)
## Step 3: Design core components
@ -101,13 +101,13 @@ Handy conversion guide:
### Use case: User posts a tweet
We could store the user's own tweets to populate the user timeline (activity from the user) in a [relational database](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) . We should discuss the [use cases and tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql) .
We could store the user's own tweets to populate the user timeline (activity from the user) in a [relational database](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms). We should discuss the [use cases and tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql).
Delivering tweets and building the home timeline (activity from people the user is following) is trickier. Fanning out tweets to all followers (60 thousand tweets delivered on fanout per second) will overload a traditional [relational database](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms) . We'll probably want to choose a data store with fast writes such as a **NoSQL database** or **Memory Cache**. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.<sup><a href=https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know>1</a></sup>
Delivering tweets and building the home timeline (activity from people the user is following) is trickier. Fanning out tweets to all followers (60 thousand tweets delivered on fanout per second) will overload a traditional [relational database](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms). We'll probably want to choose a data store with fast writes such as a **NoSQL database** or **Memory Cache**. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.<sup><a href=https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know>1</a></sup>
We could store media such as photos or videos on an **Object Store**.
* The **Client** posts a tweet to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* The **Client** posts a tweet to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* The **Web Server** forwards the request to the **Write API** server
* The **Write API** stores the tweet in the user's timeline on a **SQL database**
* The **Write API** contacts the **Fan Out Service**, which does the following:
@ -129,9 +129,9 @@ If our **Memory Cache** is Redis, we could use a native Redis list with the foll
| tweet_id user_id meta | tweet_id user_id meta | tweet_id user_id meta |
```
The new tweet would be placed in the **Memory Cache**, which populates the user's home timeline (activity from people the user is following) .
The new tweet would be placed in the **Memory Cache**, which populates the user's home timeline (activity from people the user is following).
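As a rough sketch of that write path, assuming the **Memory Cache** is Redis accessed through redis-py, a hypothetical `home_timeline:<user_id>` key per follower, and an assumed cap on timeline length:

```python
import json
import redis

cache = redis.Redis(host='localhost', port=6379)
HOME_TIMELINE_LENGTH = 800  # assumed per-timeline cap

def fan_out_tweet(tweet_id, user_id, follower_ids, meta=None):
    """Push the new tweet onto each active follower's home timeline list."""
    entry = json.dumps({'tweet_id': tweet_id, 'user_id': user_id,
                        'meta': meta or {}})
    pipe = cache.pipeline()
    for follower_id in follower_ids:
        key = 'home_timeline:%s' % follower_id
        pipe.lpush(key, entry)                        # newest entries first
        pipe.ltrim(key, 0, HOME_TIMELINE_LENGTH - 1)  # keep the list bounded
    pipe.execute()
```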
We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest) :
We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
```
$ curl -X POST --data '{ "user_id": "123", "auth_token": "ABC123", \
@ -151,16 +151,16 @@ Response:
}
```
For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc) .
For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc).
### Use case: User views the home timeline
* The **Client** posts a home timeline request to the **Web Server**
* The **Web Server** forwards the request to the **Read API** server
* The **Read API** server contacts the **Timeline Service**, which does the following:
* Gets the timeline data stored in the **Memory Cache**, containing tweet ids and user ids - O(1)
* Queries the **Tweet Info Service** with a [multiget](http://redis.io/commands/mget) to obtain additional info about the tweet ids - O(n)
* Queries the **User Info Service** with a multiget to obtain additional info about the user ids - O(n)
* Gets the timeline data stored in the **Memory Cache**, containing tweet ids and user ids - O(1)
* Queries the **Tweet Info Service** with a [multiget](http://redis.io/commands/mget) to obtain additional info about the tweet ids - O(n)
* Queries the **User Info Service** with a multiget to obtain additional info about the user ids - O(n)
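A sketch of that read path, under the same Redis assumptions and with hypothetical `multiget(ids)` client calls standing in for the **Tweet Info Service** and **User Info Service**:

```python
import json
import redis

cache = redis.Redis(host='localhost', port=6379)

def get_home_timeline(user_id, tweet_info_service, user_info_service, limit=40):
    """Hydrate a user's cached home timeline with tweet and user details."""
    raw_entries = cache.lrange('home_timeline:%s' % user_id, 0, limit - 1)
    entries = [json.loads(raw) for raw in raw_entries]
    tweet_ids = [entry['tweet_id'] for entry in entries]
    user_ids = [entry['user_id'] for entry in entries]
    tweets = tweet_info_service.multiget(tweet_ids)  # hypothetical batched call
    users = user_info_service.multiget(user_ids)     # hypothetical batched call
    return [{'tweet': tweet, 'user': user} for tweet, user in zip(tweets, users)]
```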
REST API:
@ -223,7 +223,7 @@ The response would be similar to that of the home timeline, except for tweets ma
> Identify and address bottlenecks, given the constraints.
![Imgur](http://i.imgur.com/jrUBAF7.png)
![Imgur](http://i.imgur.com/jrUBAF7.png)
**Important: Do not simply jump right into the final design from the initial design!**
@ -235,18 +235,18 @@ We'll introduce some components to complete the design and to address scalabilit
*To avoid repeating discussions*, refer to the following [system design topics](https://github.com/donnemartin/system-design-primer#index-of-system-design-topics) for main talking points, tradeoffs, and alternatives:
* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
* [CDN](https://github.com/donnemartin/system-design-primer#content-delivery-network)
* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
* [Web server (reverse proxy) ](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* [API server (application layer) ](https://github.com/donnemartin/system-design-primer#application-layer)
* [Cache](https://github.com/donnemartin/system-design-primer#cache)
* [Relational database management system (RDBMS) ](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
* [SQL write master-slave failover](https://github.com/donnemartin/system-design-primer#fail-over)
* [Master-slave replication](https://github.com/donnemartin/system-design-primer#master-slave-replication)
* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
* [CDN](https://github.com/donnemartin/system-design-primer#content-delivery-network)
* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* [API server (application layer)](https://github.com/donnemartin/system-design-primer#application-layer)
* [Cache](https://github.com/donnemartin/system-design-primer#cache)
* [Relational database management system (RDBMS)](https://github.com/donnemartin/system-design-primer#relational-database-management-system-rdbms)
* [SQL write master-slave failover](https://github.com/donnemartin/system-design-primer#fail-over)
* [Master-slave replication](https://github.com/donnemartin/system-design-primer#master-slave-replication)
* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
The **Fanout Service** is a potential bottleneck. Twitter users with millions of followers could take several minutes to have their tweets go through the fanout process. This could lead to race conditions with @replies to the tweet, which we could mitigate by re-ordering the tweets at serve time.
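The serve-time re-ordering could be as simple as the following sketch, assuming each cached entry's `meta` carries a `created_at` timestamp (an assumed field, not fixed by the design above):

```python
def reorder_at_serve_time(timeline_entries):
    """Sort cached entries newest-first to repair out-of-order fanout."""
    return sorted(timeline_entries,
                  key=lambda entry: entry['meta'].get('created_at', 0),
                  reverse=True)
```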
@ -269,10 +269,10 @@ Although the **Memory Cache** should reduce the load on the database, it is unli
The high volume of writes would overwhelm a single **SQL Write Master-Slave**, also pointing to a need for additional scaling techniques.
* [Federation](https://github.com/donnemartin/system-design-primer#federation)
* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
* [Federation](https://github.com/donnemartin/system-design-primer#federation)
* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
We should also consider moving some data to a **NoSQL Database**.
@ -282,50 +282,50 @@ We should also consider moving some data to a **NoSQL Database**.
#### NoSQL
* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### Caching
* Where to cache
* [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
* [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
* [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
* [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
* [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
* [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
* [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
* [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
* [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* What to cache
* [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
* [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
* [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* When to update the cache
* [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
* [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
* [Write-behind (write-back) ](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
* [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
* [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
* [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
* [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
* [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
### Asynchronism and microservices
* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
### Communications
* Discuss tradeoffs:
* External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
* Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
* External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
* Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
### Security
Refer to the [security section](https://github.com/donnemartin/system-design-primer#security) .
Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
### Latency numbers
See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know) .
See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
### Ongoing
View File
@ -1,6 +1,6 @@
# 设计一个网页爬虫
**注意:这个文档中的链接会直接指向[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引) 中的有关部分,以避免重复的内容。你可以参考链接的相关内容,来了解其总的要点、方案的权衡取舍以及可选的替代方案。**
**注意:这个文档中的链接会直接指向[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)中的有关部分,以避免重复的内容。你可以参考链接的相关内容,来了解其总的要点、方案的权衡取舍以及可选的替代方案。**
## 第一步:简述用例与约束条件
@ -67,7 +67,7 @@
> 列出所有重要组件以规划概要设计。
![Imgur](http://i.imgur.com/xjdAAUv.png)
![Imgur](http://i.imgur.com/xjdAAUv.png)
## 第三步:设计核心组件
@ -75,11 +75,11 @@
### 用例:爬虫服务抓取一系列网页
假设我们有一个初始列表 `links_to_crawl`(待抓取链接),它最初基于网站整体的知名度来排序。当然如果这个假设不合理,我们可以使用 [Yahoo](https://www.yahoo.com/) 、[DMOZ](http://www.dmoz.org/) 等知名门户网站作为种子链接来进行扩散 。
假设我们有一个初始列表 `links_to_crawl`(待抓取链接),它最初基于网站整体的知名度来排序。当然如果这个假设不合理,我们可以使用 [Yahoo](https://www.yahoo.com/)、[DMOZ](http://www.dmoz.org/) 等知名门户网站作为种子链接来进行扩散 。
我们将用表 `crawled_links` (已抓取链接 )来记录已经处理过的链接以及相应的页面签名。
我们可以将 `links_to_crawl``crawled_links` 记录在键-值型 **NoSQL 数据库**中。对于 `crawled_links` 中已排序的链接,我们可以使用 [Redis](https://redis.io/) 的有序集合来维护网页链接的排名。我们应当在 [选择 SQL 还是 NoSQL 的问题上,讨论有关使用场景以及利弊 ](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
我们可以将 `links_to_crawl``crawled_links` 记录在键-值型 **NoSQL 数据库**中。对于 `crawled_links` 中已排序的链接,我们可以使用 [Redis](https://redis.io/) 的有序集合来维护网页链接的排名。我们应当在 [选择 SQL 还是 NoSQL 的问题上,讨论有关使用场景以及利弊 ](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)。
* **爬虫服务**按照以下流程循环处理每一个页面链接:
* 选取排名最靠前的待抓取链接
@ -88,7 +88,7 @@
* 这样做可以避免陷入死循环
* 继续(进入下一次循环)
* 若不存在,则抓取该链接
* 在**倒排索引服务**任务队列中,新增一个生成[倒排索引](https://en.wikipedia.org/wiki/Search_engine_indexing) 任务。
* 在**倒排索引服务**任务队列中,新增一个生成[倒排索引](https://en.wikipedia.org/wiki/Search_engine_indexing)任务。
* 在**文档服务**任务队列中,新增一个生成静态标题和摘要的任务。
* 生成页面签名
* 在 **NoSQL 数据库**的 `links_to_crawl` 中删除该链接
@ -99,33 +99,33 @@
`PagesDataStore` 是**爬虫服务**中的一个抽象类,它使用 **NoSQL 数据库**进行存储。
```python
class PagesDataStore(object) :
class PagesDataStore(object):
def __init__(self, db) ;
def __init__(self, db):
self.db = db
...
def add_link_to_crawl(self, url) :
def add_link_to_crawl(self, url):
"""将指定链接加入 `links_to_crawl`。"""
...
def remove_link_to_crawl(self, url) :
def remove_link_to_crawl(self, url):
"""从 `links_to_crawl` 中删除指定链接。"""
...
def reduce_priority_link_to_crawl(self, url)
def reduce_priority_link_to_crawl(self, url):
"""在 `links_to_crawl` 中降低一个链接的优先级以避免死循环。"""
...
def extract_max_priority_page(self) :
def extract_max_priority_page(self):
"""返回 `links_to_crawl` 中优先级最高的链接。"""
...
def insert_crawled_link(self, url, signature) :
def insert_crawled_link(self, url, signature):
"""将指定链接加入 `crawled_links`。"""
...
def crawled_similar(self, signature) :
def crawled_similar(self, signature):
"""判断待抓取页面的签名是否与某个已抓取页面的签名相似。"""
...
```
@ -133,9 +133,9 @@ class PagesDataStore(object) :
`Page` 是**爬虫服务**的一个抽象类,它封装了网页对象,由页面链接、页面内容、子链接和页面签名构成。
```python
class Page(object) :
class Page(object):
def __init__(self, url, contents, child_urls, signature) :
def __init__(self, url, contents, child_urls, signature):
self.url = url
self.contents = contents
self.child_urls = child_urls
@ -145,33 +145,33 @@ class Page(object) :
`Crawler` 是**爬虫服务**的主类,由`Page` 和 `PagesDataStore` 组成。
```python
class Crawler(object) :
class Crawler(object):
def __init__(self, data_store, reverse_index_queue, doc_index_queue) :
def __init__(self, data_store, reverse_index_queue, doc_index_queue):
self.data_store = data_store
self.reverse_index_queue = reverse_index_queue
self.doc_index_queue = doc_index_queue
def create_signature(self, page) :
def create_signature(self, page):
"""基于页面链接与内容生成签名。"""
...
def crawl_page(self, page) :
def crawl_page(self, page):
for url in page.child_urls:
self.data_store.add_link_to_crawl(url)
page.signature = self.create_signature(page)
self.data_store.remove_link_to_crawl(page.url)
self.data_store.insert_crawled_link(page.url, page.signature)
self.data_store.add_link_to_crawl(url)
page.signature = self.create_signature(page)
self.data_store.remove_link_to_crawl(page.url)
self.data_store.insert_crawled_link(page.url, page.signature)
def crawl(self) :
def crawl(self):
while True:
page = self.data_store.extract_max_priority_page()
page = self.data_store.extract_max_priority_page()
if page is None:
break
if self.data_store.crawled_similar(page.signature) :
self.data_store.reduce_priority_link_to_crawl(page.url)
if self.data_store.crawled_similar(page.signature):
self.data_store.reduce_priority_link_to_crawl(page.url)
else:
self.crawl_page(page)
self.crawl_page(page)
```
### 处理重复内容
@ -186,18 +186,18 @@ class Crawler(object) :
* 假设有 10 亿条数据,我们应该使用 **MapReduce** 来输出只出现 1 次的记录。
```python
class RemoveDuplicateUrls(MRJob) :
class RemoveDuplicateUrls(MRJob):
def mapper(self, _, line) :
def mapper(self, _, line):
yield line, 1
def reducer(self, key, values) :
total = sum(values)
def reducer(self, key, values):
total = sum(values)
if total == 1:
yield key, total
```
比起处理重复内容,检测重复内容更为复杂。我们可以基于网页内容生成签名,然后对比两者签名的相似度。可能会用到的算法有 [Jaccard index](https://en.wikipedia.org/wiki/Jaccard_index) 以及 [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity)
比起处理重复内容,检测重复内容更为复杂。我们可以基于网页内容生成签名,然后对比两者签名的相似度。可能会用到的算法有 [Jaccard index](https://en.wikipedia.org/wiki/Jaccard_index) 以及 [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity)。
### 抓取结果更新策略
@ -209,7 +209,7 @@ class RemoveDuplicateUrls(MRJob) :
### 用例:用户输入搜索词后,可以看到相关的搜索结果列表,列表每一项都包含由网页爬虫生成的页面标题及摘要
* **客户端**向运行[反向代理](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器) **Web 服务器**发送一个请求
* **客户端**向运行[反向代理](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)的 **Web 服务器**发送一个请求
* **Web 服务器** 发送请求到 **Query API** 服务器
* **查询 API** 服务将会做这些事情:
* 解析查询参数
@ -248,14 +248,14 @@ $ curl https://search.com/api/v1/search?query=hello+world
},
```
对于服务器内部通信,我们可以使用 [远程过程调用协议RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
对于服务器内部通信,我们可以使用 [远程过程调用协议RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
## 第四步:架构扩展
> 根据限制条件,找到并解决瓶颈。
![Imgur](http://i.imgur.com/bWxPtQA.png)
![Imgur](http://i.imgur.com/bWxPtQA.png)
**重要提示:不要直接从最初设计跳到最终设计!**
@ -265,17 +265,17 @@ $ curl https://search.com/api/v1/search?query=hello+world
我们将会介绍一些组件来完成设计,并解决架构规模扩张问题。内置的负载均衡器将不做讨论以节省篇幅。
**为了避免重复讨论**,请参考[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引) 相关部分来了解其要点、方案的权衡取舍以及替代方案。
**为了避免重复讨论**,请参考[系统设计主题索引](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#系统设计主题的索引)相关部分来了解其要点、方案的权衡取舍以及替代方案。
* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
* [负载均衡器](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
* [水平扩展](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
* [Web 服务器(反向代理)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
* [API 服务器(应用层)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
* [缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
* [NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#nosql)
* [一致性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
* [可用性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
* [DNS](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#域名系统)
* [负载均衡器](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#负载均衡器)
* [水平扩展](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#水平扩展)
* [Web 服务器(反向代理)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#反向代理web-服务器)
* [API 服务器(应用层)](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用层)
* [缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存)
* [NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#nosql)
* [一致性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#一致性模式)
* [可用性模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#可用性模式)
有些搜索词非常热门,有些则非常冷门。热门的搜索词可以通过诸如 Redis 或者 Memcached 之类的**内存缓存**来缩短响应时间,避免**倒排索引服务**以及**文档服务**过载。**内存缓存**同样适用于流量分布不均匀以及流量短时高峰问题。从内存中读取 1 MB 连续数据大约需要 250 微秒,而从 SSD 读取同样大小的数据要花费 4 倍的时间,从机械硬盘读取需要花费 80 倍以上的时间。<sup><a href="https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数">1</a></sup>
@ -284,7 +284,7 @@ $ curl https://search.com/api/v1/search?query=hello+world
* 为了处理数据大小问题以及网络请求负载,**倒排索引服务**和**文档服务**可能需要大量应用数据分片和数据复制。
* DNS 查询可能会成为瓶颈,**爬虫服务**最好专门维护一套定期更新的 DNS 查询服务。
* 借助于[连接池](https://en.wikipedia.org/wiki/Connection_pool) ,即同时维持多个开放网络连接,可以提升**爬虫服务**的性能并减少内存使用量。
* 借助于[连接池](https://en.wikipedia.org/wiki/Connection_pool),即同时维持多个开放网络连接,可以提升**爬虫服务**的性能并减少内存使用量。
* 改用 [UDP](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#用户数据报协议udp) 协议同样可以提升性能
* 网络爬虫受带宽影响较大,请确保带宽足够维持高吞吐量。
@ -294,61 +294,61 @@ $ curl https://search.com/api/v1/search?query=hello+world
### SQL 扩展模式
* [读取复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-调优)
* [读取复制](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#主从复制)
* [联合](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#联合)
* [分片](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#分片)
* [非规范化](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#非规范化)
* [SQL 调优](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-调优)
#### NoSQL
* [键-值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
* [文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
* [键-值存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#键-值存储)
* [文档类型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#文档类型存储)
* [列型存储](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#列型存储)
* [图数据库](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#图数据库)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#sql-还是-nosql)
### 缓存
* 在哪缓存
* [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
* [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
* [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
* [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
* [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
* [客户端缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#客户端缓存)
* [CDN 缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#cdn-缓存)
* [Web 服务器缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#web-服务器缓存)
* [数据库缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库缓存)
* [应用缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#应用缓存)
* 什么需要缓存
* [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
* [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
* [数据库查询级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#数据库查询级别的缓存)
* [对象级别的缓存](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#对象级别的缓存)
* 何时更新缓存
* [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
* [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
* [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
* [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
* [缓存模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#缓存模式)
* [直写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#直写模式)
* [回写模式](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#回写模式)
* [刷新](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#刷新)
### 异步与微服务
* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
* [消息队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#消息队列)
* [任务队列](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#任务队列)
* [背压](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#背压)
* [微服务](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#微服务)
### 通信
* 可权衡选择的方案:
* 与客户端的外部通信 - [使用 REST 作为 HTTP API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
* 内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
* 与客户端的外部通信 - [使用 REST 作为 HTTP API](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#表述性状态转移rest)
* 内部通信 - [RPC](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#远程过程调用协议rpc)
* [服务发现](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#服务发现)
### 安全性
请参阅[安全](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全)
请参阅[安全](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#安全)。
### 延迟数值
请参阅[每个程序员都应该知道的延迟数](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数)
请参阅[每个程序员都应该知道的延迟数](https://github.com/donnemartin/system-design-primer/blob/master/README-zh-Hans.md#每个程序员都应该知道的延迟数)。
### 持续探讨
View File
@ -46,7 +46,7 @@ Without an interviewer to address clarifying questions, we'll define some use ca
* For simplicity, count changes the same as new pages
* 100 billion searches per month
Exercise the use of more traditional systems - don't use existing systems such as [solr](http://lucene.apache.org/solr/) or [nutch](http://nutch.apache.org/) .
Exercise the use of more traditional systems - don't use existing systems such as [solr](http://lucene.apache.org/solr/) or [nutch](http://nutch.apache.org/).
#### Calculate usage
@ -69,7 +69,7 @@ Handy conversion guide:
> Outline a high level design with all important components.
![Imgur](http://i.imgur.com/xjdAAUv.png)
![Imgur](http://i.imgur.com/xjdAAUv.png)
## Step 3: Design core components
@ -77,11 +77,11 @@ Handy conversion guide:
### Use case: Service crawls a list of urls
We'll assume we have an initial list of `links_to_crawl` ranked initially based on overall site popularity. If this is not a reasonable assumption, we can seed the crawler with popular sites that link to outside content such as [Yahoo](https://www.yahoo.com/) , [DMOZ](http://www.dmoz.org/) , etc.
We'll assume we have an initial list of `links_to_crawl` ranked initially based on overall site popularity. If this is not a reasonable assumption, we can seed the crawler with popular sites that link to outside content such as [Yahoo](https://www.yahoo.com/), [DMOZ](http://www.dmoz.org/), etc.
We'll use a table `crawled_links` to store processed links and their page signatures.
We could store `links_to_crawl` and `crawled_links` in a key-value **NoSQL Database**. For the ranked links in `links_to_crawl`, we could use [Redis](https://redis.io/) with sorted sets to maintain a ranking of page links. We should discuss the [use cases and tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql) .
We could store `links_to_crawl` and `crawled_links` in a key-value **NoSQL Database**. For the ranked links in `links_to_crawl`, we could use [Redis](https://redis.io/) with sorted sets to maintain a ranking of page links. We should discuss the [use cases and tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql).
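A sketch of that ranking with redis-py, assuming Redis 5.0+ for `ZPOPMAX`; the `links_to_crawl` key mirrors the table name above, and a higher score means a higher crawl priority:

```python
import redis

cache = redis.Redis(host='localhost', port=6379)

def add_link_to_crawl(url, priority):
    """Queue a url; a higher score means it should be crawled sooner."""
    cache.zadd('links_to_crawl', {url: priority})

def extract_max_priority_link():
    """Pop and return the highest-priority url, or None if the set is empty."""
    popped = cache.zpopmax('links_to_crawl', 1)
    if not popped:
        return None
    url, _priority = popped[0]
    return url.decode('utf-8')
```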
* The **Crawler Service** processes each page link by doing the following in a loop:
* Takes the top ranked page link to crawl
@ -90,7 +90,7 @@ We could store `links_to_crawl` and `crawled_links` in a key-value **NoSQL Datab
* This prevents us from getting into a cycle
* Continue
* Else, crawls the link
* Adds a job to the **Reverse Index Service** queue to generate a [reverse index](https://en.wikipedia.org/wiki/Search_engine_indexing)
* Adds a job to the **Reverse Index Service** queue to generate a [reverse index](https://en.wikipedia.org/wiki/Search_engine_indexing)
* Adds a job to the **Document Service** queue to generate a static title and snippet
* Generates the page signature
* Removes the link from `links_to_crawl` in the **NoSQL Database**
@ -101,33 +101,33 @@ We could store `links_to_crawl` and `crawled_links` in a key-value **NoSQL Datab
`PagesDataStore` is an abstraction within the **Crawler Service** that uses the **NoSQL Database**:
```python
class PagesDataStore(object) :
class PagesDataStore(object):
def __init__(self, db) ;
def __init__(self, db):
self.db = db
...
def add_link_to_crawl(self, url) :
def add_link_to_crawl(self, url):
"""Add the given link to `links_to_crawl`."""
...
def remove_link_to_crawl(self, url) :
def remove_link_to_crawl(self, url):
"""Remove the given link from `links_to_crawl`."""
...
def reduce_priority_link_to_crawl(self, url)
def reduce_priority_link_to_crawl(self, url):
"""Reduce the priority of a link in `links_to_crawl` to avoid cycles."""
...
def extract_max_priority_page(self) :
def extract_max_priority_page(self):
"""Return the highest priority link in `links_to_crawl`."""
...
def insert_crawled_link(self, url, signature) :
def insert_crawled_link(self, url, signature):
"""Add the given link to `crawled_links`."""
...
def crawled_similar(self, signature) :
def crawled_similar(self, signature):
"""Determine if we've already crawled a page matching the given signature"""
...
```
@ -135,9 +135,9 @@ class PagesDataStore(object) :
`Page` is an abstraction within the **Crawler Service** that encapsulates a page, its contents, child urls, and signature:
```python
class Page(object) :
class Page(object):
def __init__(self, url, contents, child_urls, signature) :
def __init__(self, url, contents, child_urls, signature):
self.url = url
self.contents = contents
self.child_urls = child_urls
@ -147,33 +147,33 @@ class Page(object) :
`Crawler` is the main class within **Crawler Service**, composed of `Page` and `PagesDataStore`.
```python
class Crawler(object) :
class Crawler(object):
def __init__(self, data_store, reverse_index_queue, doc_index_queue) :
def __init__(self, data_store, reverse_index_queue, doc_index_queue):
self.data_store = data_store
self.reverse_index_queue = reverse_index_queue
self.doc_index_queue = doc_index_queue
def create_signature(self, page) :
def create_signature(self, page):
"""Create signature based on url and contents."""
...
def crawl_page(self, page) :
def crawl_page(self, page):
for url in page.child_urls:
self.data_store.add_link_to_crawl(url)
page.signature = self.create_signature(page)
self.data_store.remove_link_to_crawl(page.url)
self.data_store.insert_crawled_link(page.url, page.signature)
self.data_store.add_link_to_crawl(url)
page.signature = self.create_signature(page)
self.data_store.remove_link_to_crawl(page.url)
self.data_store.insert_crawled_link(page.url, page.signature)
def crawl(self) :
def crawl(self):
while True:
page = self.data_store.extract_max_priority_page()
page = self.data_store.extract_max_priority_page()
if page is None:
break
if self.data_store.crawled_similar(page.signature) :
self.data_store.reduce_priority_link_to_crawl(page.url)
if self.data_store.crawled_similar(page.signature):
self.data_store.reduce_priority_link_to_crawl(page.url)
else:
self.crawl_page(page)
self.crawl_page(page)
```
### Handling duplicates
@ -188,18 +188,18 @@ We'll want to remove duplicate urls:
* With 1 billion links to crawl, we could use **MapReduce** to output only entries that have a frequency of 1
```python
class RemoveDuplicateUrls(MRJob) :
class RemoveDuplicateUrls(MRJob):
def mapper(self, _, line) :
def mapper(self, _, line):
yield line, 1
def reducer(self, key, values) :
total = sum(values)
def reducer(self, key, values):
total = sum(values)
if total == 1:
yield key, total
```
Detecting duplicate content is more complex. We could generate a signature based on the contents of the page and compare those two signatures for similarity. Some potential algorithms are [Jaccard index](https://en.wikipedia.org/wiki/Jaccard_index) and [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) .
Detecting duplicate content is more complex. We could generate a signature based on the contents of the page and compare those two signatures for similarity. Some potential algorithms are [Jaccard index](https://en.wikipedia.org/wiki/Jaccard_index) and [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity).
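For example, a Jaccard index over word shingles could serve as the similarity check (the shingle size and the 0.9 threshold below are assumptions):

```python
def shingles(contents, size=3):
    """Return the set of overlapping word n-grams for a page's contents."""
    words = contents.lower().split()
    return {tuple(words[i:i + size]) for i in range(len(words) - size + 1)}

def jaccard_index(contents_a, contents_b, size=3):
    """Jaccard similarity of two pages' shingle sets, in [0, 1]."""
    a, b = shingles(contents_a, size), shingles(contents_b, size)
    if not a and not b:
        return 1.0
    return len(a & b) / float(len(a | b))

def is_duplicate_content(contents_a, contents_b, threshold=0.9):
    """Treat two pages as duplicates if their shingle sets mostly overlap."""
    return jaccard_index(contents_a, contents_b) >= threshold
```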
### Determining when to update the crawl results
@ -211,7 +211,7 @@ We might also choose to support a `Robots.txt` file that gives webmasters contro
### Use case: User inputs a search term and sees a list of relevant pages with titles and snippets
* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* The **Web Server** forwards the request to the **Query API** server
* The **Query API** server does the following:
* Parses the query
@ -224,7 +224,7 @@ We might also choose to support a `Robots.txt` file that gives webmasters contro
* The **Reverse Index Service** ranks the matching results and returns the top ones
* Uses the **Document Service** to return titles and snippets
We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest) :
We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
```
$ curl https://search.com/api/v1/search?query=hello+world
@ -250,13 +250,13 @@ Response:
},
```
For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc) .
For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc).
## Step 4: Scale the design
> Identify and address bottlenecks, given the constraints.
![Imgur](http://i.imgur.com/bWxPtQA.png)
![Imgur](http://i.imgur.com/bWxPtQA.png)
**Important: Do not simply jump right into the final design from the initial design!**
@ -268,15 +268,15 @@ We'll introduce some components to complete the design and to address scalabilit
*To avoid repeating discussions*, refer to the following [system design topics](https://github.com/donnemartin/system-design-primer#index-of-system-design-topics) for main talking points, tradeoffs, and alternatives:
* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
* [Web server (reverse proxy) ](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* [API server (application layer) ](https://github.com/donnemartin/system-design-primer#application-layer)
* [Cache](https://github.com/donnemartin/system-design-primer#cache)
* [NoSQL](https://github.com/donnemartin/system-design-primer#nosql)
* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
* [DNS](https://github.com/donnemartin/system-design-primer#domain-name-system)
* [Load balancer](https://github.com/donnemartin/system-design-primer#load-balancer)
* [Horizontal scaling](https://github.com/donnemartin/system-design-primer#horizontal-scaling)
* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer#reverse-proxy-web-server)
* [API server (application layer)](https://github.com/donnemartin/system-design-primer#application-layer)
* [Cache](https://github.com/donnemartin/system-design-primer#cache)
* [NoSQL](https://github.com/donnemartin/system-design-primer#nosql)
* [Consistency patterns](https://github.com/donnemartin/system-design-primer#consistency-patterns)
* [Availability patterns](https://github.com/donnemartin/system-design-primer#availability-patterns)
Some searches are very popular, while others are only executed once. Popular queries can be served from a **Memory Cache** such as Redis or Memcached to reduce response times and to avoid overloading the **Reverse Index Service** and **Document Service**. The **Memory Cache** is also useful for handling the unevenly distributed traffic and traffic spikes. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.<sup><a href=https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know>1</a></sup>
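A cache-aside sketch for popular queries, assuming a Redis-backed **Memory Cache**, a hypothetical `reverse_index_service.search(query)` call, and an assumed TTL:

```python
import json
import redis

cache = redis.Redis(host='localhost', port=6379)
QUERY_CACHE_TTL_SECONDS = 60  # assumed freshness window for hot queries

def search(query, reverse_index_service):
    """Serve popular queries from the cache, falling back to the index."""
    key = 'query:%s' % query.strip().lower()
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    results = reverse_index_service.search(query)  # hypothetical service call
    cache.setex(key, QUERY_CACHE_TTL_SECONDS, json.dumps(results))
    return results
```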
Below are a few other optimizations to the **Crawling Service**:
* To handle the data size and request load, the **Reverse Index Service** and **Document Service** will likely need to make heavy use of sharding and federation
* DNS lookup can be a bottleneck; the **Crawler Service** can keep its own DNS cache that is refreshed periodically (see the DNS cache sketch after this list)
* The **Crawler Service** can improve performance and reduce memory usage by keeping many open connections at a time, referred to as [connection pooling](https://en.wikipedia.org/wiki/Connection_pool)
* Switching to [UDP](https://github.com/donnemartin/system-design-primer#user-datagram-protocol-udp) could also boost performance
* Web crawling is bandwidth intensive; ensure there is enough bandwidth to sustain high throughput
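Here is the DNS cache sketch referenced above: a minimal per-crawler cache that re-resolves a hostname once its entry is older than a TTL. Only `socket.gethostbyname` is a real API; the class name and TTL value are illustrative.

```python
# Sketch of a per-crawler DNS cache: resolved addresses are kept in memory
# and re-resolved only once their entry is older than the TTL.
import socket
import time

class DnsCache(object):

    def __init__(self, ttl_seconds=300):
        self.ttl_seconds = ttl_seconds
        self.entries = {}  # hostname -> (ip_address, resolved_at)

    def resolve(self, hostname):
        entry = self.entries.get(hostname)
        if entry is not None:
            ip_address, resolved_at = entry
            if time.time() - resolved_at < self.ttl_seconds:
                return ip_address                    # fresh entry, no lookup
        ip_address = socket.gethostbyname(hostname)  # cache miss or stale entry
        self.entries[hostname] = (ip_address, time.time())
        return ip_address
```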
### SQL scaling patterns
* [Read replicas](https://github.com/donnemartin/system-design-primer#master-slave-replication)
* [Federation](https://github.com/donnemartin/system-design-primer#federation)
* [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
* [Denormalization](https://github.com/donnemartin/system-design-primer#denormalization)
* [SQL Tuning](https://github.com/donnemartin/system-design-primer#sql-tuning)
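Tying this back to the sharding point in the crawler optimizations above: whichever store is used, sharding typically reduces to mapping a key to a shard. A minimal hash-based sketch, where the shard count and routing scheme are illustrative:

```python
# Minimal hash-based sharding sketch: each term is routed to one of N shards,
# so a query only has to fan out to the shards that own its terms.
import hashlib

NUM_SHARDS = 4

def shard_for_term(term):
    digest = hashlib.md5(term.encode('utf-8')).hexdigest()
    return int(digest, 16) % NUM_SHARDS

postings_by_shard = {shard: {} for shard in range(NUM_SHARDS)}

def index_term(term, doc_id):
    shard = shard_for_term(term)
    postings_by_shard[shard].setdefault(term, []).append(doc_id)

index_term('hello', 1)
index_term('world', 2)
print(shard_for_term('hello'), shard_for_term('world'))
```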
#### NoSQL
* [Key-value store](https://github.com/donnemartin/system-design-primer#key-value-store)
* [Document store](https://github.com/donnemartin/system-design-primer#document-store)
* [Wide column store](https://github.com/donnemartin/system-design-primer#wide-column-store)
* [Graph database](https://github.com/donnemartin/system-design-primer#graph-database)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer#sql-or-nosql)
### Caching
* Where to cache
    * [Client caching](https://github.com/donnemartin/system-design-primer#client-caching)
    * [CDN caching](https://github.com/donnemartin/system-design-primer#cdn-caching)
    * [Web server caching](https://github.com/donnemartin/system-design-primer#web-server-caching)
    * [Database caching](https://github.com/donnemartin/system-design-primer#database-caching)
    * [Application caching](https://github.com/donnemartin/system-design-primer#application-caching)
* What to cache
    * [Caching at the database query level](https://github.com/donnemartin/system-design-primer#caching-at-the-database-query-level)
    * [Caching at the object level](https://github.com/donnemartin/system-design-primer#caching-at-the-object-level)
* When to update the cache
    * [Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
    * [Write-through](https://github.com/donnemartin/system-design-primer#write-through)
    * [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer#write-behind-write-back)
    * [Refresh ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)
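As a concrete sketch of the write-through pattern linked above, with a cache-aside style read for comparison, using plain dicts as stand-ins for the database and the memory cache:

```python
# Minimal write-through sketch: the cache is updated on the same path as the
# database write, so reads after a write see fresh data. `db` and `cache`
# are simple dicts standing in for a real database and memory cache.
db = {}
cache = {}

def write_through(key, value):
    db[key] = value      # write to the system of record first
    cache[key] = value   # then update the cache on the same request path

def read(key):
    if key in cache:
        return cache[key]
    value = db.get(key)
    if value is not None:
        cache[key] = value  # backfill on a miss (cache-aside style read)
    return value

write_through('url:https://foo.com', {'title': 'foo'})
print(read('url:https://foo.com'))
```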
### Asynchronism and microservices
* [Message queues](https://github.com/donnemartin/system-design-primer#message-queues)
* [Task queues](https://github.com/donnemartin/system-design-primer#task-queues)
* [Back pressure](https://github.com/donnemartin/system-design-primer#back-pressure)
* [Microservices](https://github.com/donnemartin/system-design-primer#microservices)
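For a concrete feel for back pressure, here is a minimal sketch using a bounded in-process queue as a stand-in for a real message queue such as RabbitMQ or SQS; the queue size, timeout, and worker body are illustrative.

```python
# Minimal back-pressure sketch: a bounded queue makes producers block (or fail
# fast) instead of letting work pile up without limit.
import queue
import threading

crawl_queue = queue.Queue(maxsize=100)  # bounded => back pressure on producers

def producer(urls):
    for url in urls:
        crawl_queue.put(url, timeout=5)  # blocks when the queue is full

def worker():
    while True:
        url = crawl_queue.get()
        print('crawling', url)  # real workers would fetch and index the page
        crawl_queue.task_done()

threading.Thread(target=worker, daemon=True).start()
producer(['https://foo.com', 'https://bar.com'])
crawl_queue.join()  # wait until all queued work has been processed
```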
### Communications
* Discuss tradeoffs:
    * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest)
    * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer#remote-procedure-call-rpc)
* [Service discovery](https://github.com/donnemartin/system-design-primer#service-discovery)
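To illustrate the REST-outside / RPC-inside split, here is a toy sketch that uses XML-RPC from the Python standard library as a stand-in for a real RPC framework such as gRPC or Thrift; the service method, return value, and port are illustrative.

```python
# Toy illustration of internal RPC: the public Query API speaks REST to clients,
# while internal services could expose RPC endpoints it calls like local functions.
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def reverse_index_search(query):
    # Hypothetical internal method of the Reverse Index Service.
    return [1, 2]  # ranked document ids

server = SimpleXMLRPCServer(('localhost', 8000), logRequests=False)
server.register_function(reverse_index_search, 'search')
threading.Thread(target=server.serve_forever, daemon=True).start()

# The Query API server calls the internal service as if it were local:
reverse_index = xmlrpc.client.ServerProxy('http://localhost:8000/')
print(reverse_index.search('hello world'))  # -> [1, 2]
```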
### Security
Refer to the [security section](https://github.com/donnemartin/system-design-primer#security).
### Latency numbers
See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer#latency-numbers-every-programmer-should-know).
### Ongoing

View File

from mrjob.job import MRJob


class RemoveDuplicateUrls(MRJob):
    """MapReduce job that emits only the URLs appearing exactly once in the input."""

    def mapper(self, _, line):
        # Emit each URL (one per input line) with a count of 1.
        yield line, 1

    def reducer(self, key, values):
        # Keep the URL only if it was seen a single time across all input.
        total = sum(values)
        if total == 1:
            yield key, total

    def steps(self):
        """Run the map and reduce steps."""
        return [
            self.mr(mapper=self.mapper,
                    reducer=self.reducer)
        ]


if __name__ == '__main__':
    RemoveDuplicateUrls.run()
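mrjob jobs are normally launched from the command line, reading one record per input line and writing results to stdout. As a hedged illustration (the file names `remove_duplicate_urls.py` and `urls.txt` are assumptions for this example), an input file of one URL per line could be produced like this and then fed to the job:

```python
# Build a toy input file with one URL per line; file names are illustrative.
urls = ['https://foo.com', 'https://bar.com', 'https://foo.com']
with open('urls.txt', 'w') as f:
    for url in urls:
        f.write(url + '\n')

# Then run the job locally, e.g.:
#   python remove_duplicate_urls.py urls.txt
# Only https://bar.com is emitted, since it appears exactly once.
```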

View File

# -*- coding: utf-8 -*-


class PagesDataStore(object):
    """Abstraction over the `links_to_crawl` and `crawled_links` stores."""

    def __init__(self, db):
        self.db = db

    def add_link_to_crawl(self, url):
        """Add the given link to `links_to_crawl`."""
        pass

    def remove_link_to_crawl(self, url):
        """Remove the given link from `links_to_crawl`."""
        pass

    def reduce_priority_link_to_crawl(self, url):
        """Reduce the priority of a link in `links_to_crawl` to avoid cycles."""
        pass

    def extract_max_priority_page(self):
        """Return the highest priority link in `links_to_crawl`."""
        pass

    def insert_crawled_link(self, url, signature):
        """Add the given link to `crawled_links`."""
        pass

    def crawled_similar(self, signature):
        """Determine if we've already crawled a page matching the given signature."""
        pass


class Page(object):
    """A fetched page along with the links it contains."""

    def __init__(self, url, contents, child_urls):
        self.url = url
        self.contents = contents
        self.child_urls = child_urls
        self.signature = self.create_signature()

    def create_signature(self):
        # Create a signature based on the url and contents
        pass


class Crawler(object):
    """Pulls the highest priority links and feeds the indexing queues."""

    def __init__(self, pages, data_store, reverse_index_queue, doc_index_queue):
        self.pages = pages
        self.data_store = data_store
        self.reverse_index_queue = reverse_index_queue
        self.doc_index_queue = doc_index_queue

    def crawl_page(self, page):
        # Queue the page's outgoing links for future crawling.
        for url in page.child_urls:
            self.data_store.add_link_to_crawl(url)
        # Hand the page off to the indexing pipelines.
        self.reverse_index_queue.generate(page)
        self.doc_index_queue.generate(page)
        # Mark the page itself as crawled.
        self.data_store.remove_link_to_crawl(page.url)
        self.data_store.insert_crawled_link(page.url, page.signature)

    def crawl(self):
        while True:
            page = self.data_store.extract_max_priority_page()
            if page is None:
                break
            if self.data_store.crawled_similar(page.signature):
                # Similar content was already crawled; deprioritize to avoid cycles.
                self.data_store.reduce_priority_link_to_crawl(page.url)
            else:
                self.crawl_page(page)
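To show how these pieces fit together, here is a hedged sketch that wires the `Crawler` above to toy in-memory stand-ins. The `SimplePage`, `InMemoryPagesDataStore`, and `PrintQueue` classes, the MD5 signature, and the seed URLs are all invented for this example; the design above calls for Redis sorted sets and real message queues instead.

```python
# Toy wiring of the Crawler above; all classes below are illustrative stand-ins.
import hashlib


class SimplePage(Page):

    def create_signature(self):
        # Toy signature: hash of url + contents (left open in the design above).
        return hashlib.md5((self.url + self.contents).encode('utf-8')).hexdigest()


class InMemoryPagesDataStore(PagesDataStore):

    def __init__(self):
        super(InMemoryPagesDataStore, self).__init__(db=None)
        self.links_to_crawl = {}        # url -> priority
        self.crawled_signatures = set()

    def add_link_to_crawl(self, url):
        self.links_to_crawl.setdefault(url, 1)

    def remove_link_to_crawl(self, url):
        self.links_to_crawl.pop(url, None)

    def reduce_priority_link_to_crawl(self, url):
        if url in self.links_to_crawl:
            self.links_to_crawl[url] -= 1

    def extract_max_priority_page(self):
        if not self.links_to_crawl:
            return None
        url = max(self.links_to_crawl, key=self.links_to_crawl.get)
        contents = 'contents of ' + url            # pretend we fetched the page
        return SimplePage(url, contents, child_urls=[])

    def insert_crawled_link(self, url, signature):
        self.crawled_signatures.add(signature)

    def crawled_similar(self, signature):
        return signature in self.crawled_signatures


class PrintQueue(object):
    """Stand-in for the reverse index and document index message queues."""

    def generate(self, page):
        print('queued for indexing:', page.url)


data_store = InMemoryPagesDataStore()
data_store.add_link_to_crawl('https://foo.com')
data_store.add_link_to_crawl('https://bar.com')
crawler = Crawler(pages=None, data_store=data_store,
                  reverse_index_queue=PrintQueue(), doc_index_queue=PrintQueue())
crawler.crawl()  # both queues announce each url, then the loop stops when links_to_crawl is empty
```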