In addition to coding interviews, system design is a **required component** of the **technical interview process** at many tech companies.
The provided [Anki flashcard decks](https://apps.ankiweb.net/) use spaced repetition to help you retain key system design concepts.
Check out the sister repo [Interactive Coding Challenges](https://github.com/donnemartin/interactive-coding-challenges), which contains an additional Anki deck:
* **Short timeline** - Aim for breadth with system design topics. Practice by solving some interview questions.
* **Medium timeline** - Aim for breadth and some depth with system design topics. Practice by solving many interview questions.
* **Long timeline** - Aim for breadth and more depth with system design topics. Practice by solving most interview questions.
| | Short | Medium | Long |
|---|---|---|---|
| Read through the [System design topics index](#indeks-topik-perancangan-sistem) to get a broad understanding of how systems work | :+1: | :+1: | :+1: |
| Read through a few articles in the [Company engineering blogs](#blog-teknik-perusahaan) for the companies you are interviewing with | :+1: | :+1: | :+1: |
| Read through a few [Real world architectures](#arsitektur-dunia-nyata) | :+1: | :+1: | :+1: |
| Work through [System design interview questions with solutions](#pertanyaan-wawancara-perancangan-sistem-beserta-solusinya) | Some | Many | Most |
| Work through [Object-oriented design interview questions with solutions](#pertanyaan-wawancara-perancangan-berbasis-objek-beserta-solusinya) | Some | Many | Most |
| Review [Additional system design interview questions](#tambahan-pertanyaan-wawancara-perancangan-sistem) | Some | Many | Most |
To strengthen the discussion process, review the [System design interview questions with solutions](#pertanyaan-wawancara-perancangan-sistem-beserta-solusinya) section using the following steps.
First, you'll need a basic understanding of common principles: what they are, how they are used, and their pros and cons.
A service is scalable if it results in increased performance in a manner proportional to resources added.
Generally, increasing performance means serving more units of work,
but it can also be to handle larger units of work, such as when datasets grow.<sup><a href=http://www.allthingsdistributed.com/2006/03/a_word_on_scalability.html>1</a></sup>
AP is a good choice if the business needs to allow for [eventual consistency](#konsistensi-yang-mungkin-terjadi-eventual-consistency) or when the system needs to continue working despite external errors.
This approach is seen in systems such as memcached.
Weak consistency works well in real time use cases such as VoIP, video chat, and realtime multiplayer games.
For example, if you are on a phone call and lose reception for a few seconds, you will not hear what was spoken during the connection loss.
* Fail-over adds more hardware and additional complexity.
* There is a potential for loss of data if the active system fails before any newly written data can be replicated to the passive.
If a service consists of multiple components prone to failure, the service's overall availability depends on whether the components are in sequence or in parallel.
If both `Foo` and `Bar` each had 99.9% availability, their total availability in parallel would be 99.9999%.
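To make this concrete, here is the standard availability arithmetic for two components, each at 99.9%, worked out in Python (a quick sketch restating the usual sequence vs. parallel formulas):

```python
foo = bar = 0.999  # availability of each component

# In sequence: overall availability is lower than either component
sequence = foo * bar                   # 0.998001 -> 99.8001%

# In parallel: overall availability is higher than either component
parallel = 1 - (1 - foo) * (1 - bar)   # 0.999999 -> 99.9999%
```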
DNS is hierarchical, with a few authoritative servers at the top level.
Your router or ISP provides information about which DNS server(s) to contact when doing a lookup.
Lower-level DNS servers cache mappings, which could become stale due to DNS propagation delays.
DNS results can also be cached by your browser or OS for a certain period of time, determined by the [time to live (TTL)](https://en.wikipedia.org/wiki/Time_to_live).
* **NS record (name server)** - Specifies the DNS servers for your domain/subdomain.
* **MX record (mail exchange)** - Specifies the mail servers for accepting messages.
* **A record (address)** - Points a name to an IP address.
* **CNAME (canonical)** - Points a name to another name. That name can be another `CNAME` or an `A` record (e.g., example.com points to www.example.com).
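As an illustration, the record types above can be queried directly. A minimal sketch, assuming the third-party `dnspython` package (`pip install dnspython`):

```python
import dns.resolver

# Look up a few of the record types described above
for record_type in ("NS", "MX", "A", "CNAME"):
    try:
        answers = dns.resolver.resolve("example.com", record_type)
        for answer in answers:
            print(record_type, answer.to_text())
    except dns.resolver.NoAnswer:
        print(record_type, "no record")  # e.g. no CNAME at the zone apex
```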
* Accessing a DNS server introduces a slight delay, although this is mitigated by the caching described above.
* DNS server management can be complex; servers are generally managed by [governments, ISPs, and large companies](http://superuser.com/questions/472695/who-controls-the-dns-servers/472729).
* DNS services have recently come under [DDoS attack](http://dyn.com/blog/dyn-analysis-summary-of-friday-october-21-attack/), preventing users from accessing websites such as Twitter without knowing Twitter's IP address(es).
A content delivery network (CDN) is a globally distributed network of proxy servers, serving content from locations closer to the user.
Generally, static files such as HTML/CSS/JS, photos, and videos are served from the CDN, although some CDNs such as Amazon CloudFront support dynamic content.
The site's DNS resolution will tell clients which server to contact.
The [TTL (time to live)](https://en.wikipedia.org/wiki/Time_to_live) determines how long the content is cached.
Pull CDNs minimize storage space on the CDN, but can create redundant traffic if files expire and are pulled before they have actually changed.
Sites with heavy traffic work well with pull CDNs, as traffic is spread out more evenly, with only recently-requested content remaining on the CDN.
* **SSL termination** - Decrypt incoming requests and encrypt server responses so backend servers do not have to perform these potentially expensive operations
    * Removes the need to install [X.509 certificates](https://en.wikipedia.org/wiki/X.509) on each server
* **Session persistence** - Issue cookies and route a specific client's requests to the same server if the web apps do not keep track of sessions
To protect against failures, it's common to set up multiple load balancers, either in [active-passive](#aktif-pasif) or [active-active](#aktif-aktif) mode.
Layer 4 load balancers look at info at the [transport layer](#komunikasi) to decide how to distribute requests.
Generally, this involves the source and destination IP addresses and ports in the header, but not the contents of the packet.
Layer 4 load balancers forward network packets to and from the upstream server, performing [Network Address Translation (NAT)](https://www.nginx.com/resources/glossary/layer-4-load-balancing/).
Layer 7 load balancers look at info at the [application layer](#komunikasi) to decide how to distribute requests.
This can involve the contents of the header, message, and cookies.
Layer 7 load balancers terminate network traffic, read the message, make a load-balancing decision, then open a connection to the selected server.
For example, a layer 7 load balancer can direct video traffic to servers that host videos while directing more sensitive user billing traffic to security-hardened servers.
At the cost of flexibility, layer 4 load balancing requires less time and computing resources than layer 7, although the performance impact can be minimal on modern commodity hardware.
Load balancers can also help with horizontal scaling, improving performance and availability.
Scaling out using commodity machines is more cost efficient and results in higher availability than scaling up a single server on more expensive hardware,
which is called vertical scaling.
It is also easier to hire for talent working on commodity hardware than it is for specialized enterprise systems.
* Scaling horizontally introduces complexity and involves cloning servers
* Servers should be stateless: they should not contain any user-related data such as sessions or user profiles
* Sessions can be stored in a centralized data store such as a [database](#basis-data) (SQL, NoSQL) or a persistent [cache](#singgahan) (Redis, Memcached), as in the sketch after this list
* Downstream servers such as caches and databases need to handle more simultaneous connections as upstream servers scale out
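A minimal sketch of the session point above, assuming a local Redis instance and the third-party `redis` client; any stateless web server can then read or write any session:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def save_session(session_id, data, ttl_seconds=3600):
    # Any web server behind the load balancer can write the session
    r.set("session:{0}".format(session_id), json.dumps(data), ex=ttl_seconds)

def load_session(session_id):
    # Any other web server can read it back on the next request
    raw = r.get("session:{0}".format(session_id))
    return json.loads(raw) if raw else None
```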
* The load balancer can become a performance bottleneck if it does not have enough resources or is not configured properly.
* Introducing a load balancer to help eliminate a single point of failure results in increased complexity.
* A single load balancer is a single point of failure; configuring multiple load balancers further increases complexity.
A reverse proxy is a web server that centralizes internal services and provides unified interfaces to the public.
Requests from clients are forwarded to a server that can fulfill them before the reverse proxy returns the server's response to the client.
* **Increased security** - Hide information about backend servers, blacklist IP addresses, and limit the number of connections per client
* **Increased scalability and flexibility** - Clients only see the reverse proxy's IP address, allowing you to scale servers or change their configuration
* **SSL termination** - Decrypt incoming requests and encrypt server responses so backend servers do not have to perform these potentially expensive operations
    * Removes the need to install [X.509 certificates](https://en.wikipedia.org/wiki/X.509) on each server
* **Compression** - Compress server responses
* **Caching** - Return the response for cached requests
* **Static content** - Serve static content directly
* Deploying a load balancer is useful when you have multiple servers. Often, load balancers route traffic to a set of servers serving the same function.
* Reverse proxies can be useful even with just one web server or application server, opening up the benefits described in the previous section.
* Solutions such as NGINX and HAProxy support both layer 7 reverse proxying and load balancing.
* Introducing a reverse proxy results in increased complexity.
* A single reverse proxy is a single point of failure; configuring multiple reverse proxies (e.g. a [failover](https://en.wikipedia.org/wiki/Failover)) further increases complexity.
<i><a href=http://lethain.com/introduction-to-architecting-systems-for-scale/#platform_layer>Source: Intro to architecting systems for scale</a></i>
Separating out the web layer from the application layer (also known as the platform layer) allows you to scale and configure both layers independently.
Adding a new API results in adding application servers without necessarily adding additional web servers.
The single responsibility principle advocates for small and autonomous services that work together.
Small teams with small services can plan more aggressively for rapid growth.
Related to this discussion are [microservices](https://en.wikipedia.org/wiki/Microservices), which can be described as a suite of independently deployable, small, modular services.
Each service runs a unique process and communicates through a well-defined, lightweight mechanism to serve a business goal.<sup><a href=https://smartbear.com/learn/api-design/what-are-microservices>1</a></sup>
Systems such as [Consul](https://www.consul.io/docs/index.html), [Etcd](https://coreos.com/etcd/docs/latest), and [Zookeeper](http://www.slideshare.net/sauravhaloi/introduction-to-apache-zookeeper) can help services find each other by keeping track of registered names, addresses, and ports.
[Health checks](https://www.consul.io/intro/getting-started/checks.html) help verify service integrity and are often done using an [HTTP](#hypertext-transfer-protocol-http) endpoint.
Both Consul and Etcd have a built-in [key-value store](#gudang-kunci-nilai) that can be useful for storing config values and other shared data.
* Adding an application layer with loosely coupled services requires a different approach from an architectural, operations, and process viewpoint than a monolithic system.
* Microservices can add complexity in terms of deployments and operations.
There are many techniques to scale a relational database: **master-slave replication**, **master-master replication**, **federation**, **sharding**, **denormalization**, and **SQL tuning**.
The master serves reads and writes, replicating writes to the slaves, which serve only reads.
Slaves can also replicate to additional slaves in a tree-like fashion.
If the master goes offline, the system can continue to operate in read-only mode until a slave is promoted to master or a new master is provisioned.
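A minimal sketch of how application code might route queries under master-slave replication; `master`, `replicas`, and the `connect` helper are hypothetical, not a specific library's API:

```python
import random

master = connect("master-db")                          # hypothetical connection helper
replicas = [connect("replica-1"), connect("replica-2")]

def execute(sql, params=()):
    # Writes must go to the master, which replicates them to the slaves
    if sql.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
        return master.query(sql, params)
    # Reads can be spread across the read-only replicas
    return random.choice(replicas).query(sql, params)
```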
* You'll need a load balancer or you'll need to make changes to your application logic to determine where to write.
* Most master-master systems are either loosely consistent (violating ACID) or have increased write latency due to synchronization.
* Conflict resolution comes more into play as more write nodes are added and as latency increases.
* See [Disadvantage(s): replication](#kekurangan-replikasi) for points related to both master-slave and master-master.
* There is a potential for loss of data if the master fails before any newly written data can be replicated to other nodes.
* Writes are replayed to the read replicas. If there are a lot of writes, the read replicas can get bogged down with replaying writes and can't do as many reads.
* The more read slaves, the more you have to replicate, which leads to greater replication lag.
* On some systems, writing to the master can spawn multiple threads to write in parallel, whereas read replicas only support writing sequentially with a single thread.
* Replication adds more hardware and additional complexity.
Federation (or functional partitioning) splits up databases by function.
For example, instead of a single, monolithic database, you could have three databases: **forums**, **users**, and **products**.
Federation results in less read and write traffic to each database and therefore less replication lag.
Smaller databases result in more data that can fit in memory, which in turn results in more cache hits due to improved cache locality.
With no single central master serializing writes, you can write in parallel, increasing throughput.
* Federation is not effective if your schema requires huge functions or tables.
* You'll need to update your application logic to determine which database to read and write (a minimal routing sketch follows this list).
* Joining data from two databases is more complex with a [server link](http://stackoverflow.com/questions/5145637/querying-data-by-joining-two-tables-in-two-database-on-different-servers).
* Federation adds more hardware and additional complexity.
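The routing sketch mentioned above; the connection objects for the three example databases are hypothetical:

```python
# Map each functional area to its own database connection (hypothetical)
databases = {
    "forums": forums_db,
    "users": users_db,
    "products": products_db,
}

def db_for(function):
    # Application logic picks the database by function, e.g. db_for("users")
    return databases[function]
```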
Similar to the advantages of [federation](#federasi), sharding results in less read and write traffic, less replication, and more cache hits.
Index size is also reduced, which generally improves performance with faster queries.
If one shard goes down, the other shards are still operational, although you'll want to add some form of replication to avoid data loss.
Like federation, there is no single central master serializing writes, allowing you to write in parallel with increased throughput.
* You'll need to update your application logic to work with shards, which could result in more complex queries.
* Data distribution can become lopsided in a shard. For example, a set of power users on one shard could result in increased load on that shard compared to the others.
* Rebalancing adds additional complexity. A sharding function based on [consistent hashing](http://www.paperplanes.de/2011/12/9/the-magic-of-consistent-hashing.html) can reduce the amount of transferred data (see the sketch after this list).
* Joining data from multiple shards is more complex.
* Sharding adds more hardware and additional complexity.
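A minimal consistent hashing sketch along the lines of the article linked above: shards are placed on a hash ring (with virtual nodes for balance), and each key maps to the next shard clockwise, so adding or removing a shard only remaps neighboring keys:

```python
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, shards, replicas=100):
        # Place several virtual nodes per shard on the ring for better balance
        self.ring = sorted(
            (self._hash("{0}:{1}".format(shard, i)), shard)
            for shard in shards
            for i in range(replicas)
        )
        self.hashes = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def shard_for(self, key):
        # Walk clockwise to the first virtual node at or after the key's hash
        index = bisect.bisect(self.hashes, self._hash(key)) % len(self.hashes)
        return self.ring[index][1]

ring = ConsistentHashRing(["shard-0", "shard-1", "shard-2"])
print(ring.shard_for("user:12345"))  # e.g. shard-1
```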
Denormalization attempts to improve read performance at the expense of some write performance.
Redundant copies of the data are written in multiple tables to avoid expensive joins.
Some RDBMS such as [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL) and Oracle support [materialized views](https://en.wikipedia.org/wiki/Materialized_view), which handle the work of storing redundant information and keeping the redundant copies consistent.
Once data becomes distributed with techniques such as [federation](#federasi) and [sharding](#pecahan), managing joins across data centers further increases complexity.
Denormalization might circumvent the need for such complex joins.
In most systems, reads can heavily outnumber writes at a ratio of 100:1 or even 1000:1.
A read resulting in a complex database join can be very expensive, spending a significant amount of time on disk operations.
* Constraints can help redundant copies of information stay in sync, which increases complexity of the database design.
SQL tuning is a broad topic, and many [books](https://www.amazon.com/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords=sql+tuning) have been written as reference.
* **Benchmark** - Simulate high-load situations with tools such as [ab](http://httpd.apache.org/docs/2.2/programs/ab.html).
* **Profile** - Enable tools such as the [slow query log](http://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html) to help track performance issues.
* MySQL writes data to disk in contiguous blocks for fast access.
* Use `CHAR` instead of `VARCHAR` for fixed-length fields.
    * `CHAR` effectively allows for fast, random access, whereas with `VARCHAR`, you must find the end of a string before moving on to the next one.
* Use `TEXT` for large blocks of text such as blog posts. `TEXT` also allows for boolean searches. Using a `TEXT` field results in storing a pointer on disk that is used to locate the text block.
* Use `INT` for larger numbers up to 2^32 or 4 billion.
* Use `DECIMAL` for currency to avoid floating point representation errors.
* Avoid storing large `BLOBS`; store the location of where to get the object instead.
* `VARCHAR(255)` is the largest number of characters that can be counted in an 8-bit number, often maximizing the use of a byte in some RDBMS.
* Set the `NOT NULL` constraint where applicable to [improve search performance](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search).
* Columns that you are querying (`SELECT`, `GROUP BY`, `ORDER BY`, `JOIN`) can be faster with indices (see the sqlite3 sketch after this list).
* Indices are usually represented as self-balancing [B-trees](https://en.wikipedia.org/wiki/B-tree) that keep data sorted and allow searches, sequential access, insertions, and deletions in logarithmic time.
* Placing an index can keep the data in memory, requiring more space.
* Writes can also be slower since the index needs to be updated as well.
* When loading large amounts of data, it might be faster to disable indices, load the data, then rebuild the indices.
* In some cases, the [query cache](https://dev.mysql.com/doc/refman/5.7/en/query-cache.html) can lead to [performance issues](https://www.percona.com/blog/2016/10/12/mysql-5-7-performance-tuning-immediately-after-installation/).
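The sqlite3 sketch referenced above (standard library, so it is self-contained); the same idea applies to MySQL's `EXPLAIN`. Adding an index turns a full table scan into an index search:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INTEGER, name TEXT NOT NULL)")

plan = "EXPLAIN QUERY PLAN SELECT * FROM users WHERE name = ?"
print(conn.execute(plan, ("alice",)).fetchall())  # full table scan of users

conn.execute("CREATE INDEX idx_users_name ON users(name)")
print(conn.execute(plan, ("alice",)).fetchall())  # search using idx_users_name
```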
* [Tips for optimizing MySQL queries](http://aiddroid.com/10-tips-optimizing-mysql-queries-dont-suck/)
* [Is there a good reason I see VARCHAR(255) used so often?](http://stackoverflow.com/questions/1217466/is-there-a-good-reason-i-see-varchar255-used-so-often-as-opposed-to-another-l)
* [How do null values affect performance?](http://stackoverflow.com/questions/1017239/how-do-null-values-affect-performance-in-a-database-search)
NoSQL is a collection of data items represented in a **key-value store**, **document store**, **wide column store**, or **graph database**.
Data is denormalized, and joins are generally done in the application code.
Most NoSQL stores lack true ACID transactions and favor [eventual consistency](#konsistensi-yang-mungkin-terjadi-eventual-consistency).
* **Basically available** - The system guarantees availability.
* **Soft state** - The state of the system may change over time, even without input.
* **Eventual consistency** - The system will become consistent over a period of time, given that the system doesn't receive input during that period.
In addition to choosing between [SQL or NoSQL](#sql-atau-nosql), it is helpful to understand which type of NoSQL database best fits your use case(s).
We'll review **key-value stores**, **document stores**, **wide column stores**, and **graph databases** in the next section.
A key-value store generally allows for O(1) reads and writes and is often backed by memory or SSD.
Data stores can maintain keys in [lexicographic order](https://en.wikipedia.org/wiki/Lexicographical_order), allowing efficient retrieval of key ranges.
Key-value stores can allow for storing of metadata with a value.
Key-value stores provide high performance and are often used for simple data models or for rapidly-changing data, such as an in-memory cache layer.
Since key-value stores offer only a limited set of operations, complexity is shifted to the application layer if additional operations are needed.
A document store is centered around documents (XML, JSON, binary, etc.), where a document stores all the information for a given object.
Document stores provide APIs or a query language to query based on the internal structure of the document itself.
*Note: many key-value stores include features for working with a value's metadata, blurring the lines between these two storage types.*
Some document stores like [MongoDB](https://www.mongodb.com/mongodb-architecture) and [CouchDB](https://blog.couchdb.org/2016/08/01/couchdb-2-0-architecture/) also provide a SQL-like language to perform complex queries.
[DynamoDB](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/decandia07dynamo.pdf) supports both key-values and documents.
Google introduced [Bigtable](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf) as the first wide column store.
Bigtable's design influenced [HBase](https://www.mapr.com/blog/in-depth-look-hbase-architecture), often used in the Hadoop ecosystem, and [Cassandra](http://docs.datastax.com/en/cassandra/3.0/cassandra/architecture/archIntro.html) from Facebook.
Stores such as Bigtable, HBase, and Cassandra maintain keys in lexicographic order, allowing efficient retrieval of selective key ranges.
* [NoSQL databases: a survey and decision guidance](https://medium.com/baqend-blog/nosql-databases-a-survey-and-decision-guidance-ea7823a822d#.wskogqenq)
<i><a href=http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html>Source: Scalable system design patterns</a></i>
</p>
Caching improves page load times and can reduce the load on your servers and databases. In this model, the dispatcher first looks up whether the request has been made before and tries to find the previous result to return, in order to save the actual execution.
Databases often benefit from a uniform distribution of reads and writes across its partitions. Popular items can skew the distribution, causing bottlenecks. Putting a cache in front of a database can help absorb uneven loads and spikes in traffic.
### Client caching
Caches can be located on the client side (OS or browser), [server side](#reverse-proxy-web-server), or in a distinct cache layer.
### CDN caching
[CDNs](#content-delivery-network) are considered a type of cache.
### Web server caching
[Reverse proxies](#reverse-proxy-web-server) and caches such as [Varnish](https://www.varnish-cache.org/) can serve static and dynamic content directly. Web servers can also cache requests, returning responses without having to contact application servers.
### Database caching
Your database usually includes some level of caching in a default configuration, optimized for a generic use case. Tweaking these settings for specific usage patterns can further boost performance.
### Application caching
In-memory caches such as Memcached and Redis are key-value stores between your application and your data storage. Since the data is held in RAM, it is much faster than typical databases where data is stored on disk. RAM is more limited than disk, so [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms) algorithms such as [least recently used (LRU)](https://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used) can help invalidate 'cold' entries and keep 'hot' data in RAM.
Redis has the following additional features:
* Persistence option
* Built-in data structures such as sorted sets and lists
There are multiple levels you can cache that fall into two general categories: **database queries** and **objects**:
* Row level
* Query-level
* Fully-formed serializable objects
* Fully-rendered HTML
Generally, you should try to avoid file-based caching, as it makes cloning and auto-scaling more difficult.
### Caching at the database query level
Whenever you query the database, hash the query as a key and store the result to the cache. This approach suffers from expiration issues:
* Hard to delete a cached result with complex queries
* If one piece of data changes such as a table cell, you need to delete all cached queries that might include the changed cell
### Caching at the object level
See your data as an object, similar to what you do with your application code. Have your application assemble the dataset from the database into a class instance or a data structure(s):
* Remove the object from cache if its underlying data has changed
* Allows for asynchronous processing: workers assemble objects by consuming the latest cached object
Suggestions of what to cache:
* User sessions
* Fully rendered web pages
* Activity streams
* User graph data
### When to update the cache
Since you can only store a limited amount of data in cache, you'll need to determine which cache update strategy works best for your use case.
#### Cache-aside
<palign="center">
<imgsrc="http://i.imgur.com/ONjORqk.png"/>
<br/>
<i><ahref=http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast>Source: From cache to in-memory data grid</a></i>
</p>
The application is responsible for reading and writing from storage. The cache does not interact with storage directly. The application does the following:
* Look for entry in cache, resulting in a cache miss
* Load entry from the database
* Add entry to cache
* Return entry
```python
def get_user(self, user_id):
    # Look for the entry in the cache first
    user = cache.get("user.{0}".format(user_id))
    if user is None:
        # Cache miss: load the entry from the database
        user = db.query("SELECT * FROM users WHERE user_id = {0}", user_id)
        if user is not None:
            # Add the entry to the cache for subsequent reads
            key = "user.{0}".format(user_id)
            cache.set(key, json.dumps(user))
    return user
```
[Memcached](https://memcached.org/) is generally used in this manner.
Subsequent reads of data added to cache are fast. Cache-aside is also referred to as lazy loading. Only requested data is cached, which avoids filling up the cache with data that isn't requested.
##### Disadvantage(s): cache-aside
* Each cache miss results in three trips, which can cause a noticeable delay.
* Data can become stale if it is updated in the database. This issue is mitigated by setting a time-to-live (TTL) which forces an update of the cache entry, or by using write-through.
* When a node fails, it is replaced by a new, empty node, increasing latency.
The application uses the cache as the main data store, reading and writing data to it, while the cache is responsible for reading and writing to the database:
* Application adds/updates entry in cache
* Cache synchronously writes entry to data store
* Return
Application code:
```python
set_user(12345, {"foo":"bar"})
```
Cache code:
```python
def set_user(user_id, values):
    # Update the database; the cache entry is then updated synchronously
    user = db.query("UPDATE Users WHERE id = {0}", user_id, values)
    cache.set(user_id, user)
```
Write-through is a slow overall operation due to the write operation, but subsequent reads of just written data are fast. Users are generally more tolerant of latency when updating data than reading data. Data in the cache is not stale.
##### Disadvantage(s): write through
* When a new node is created due to failure or scaling, the new node will not cache entries until the entry is updated in the database. Cache-aside in conjunction with write through can mitigate this issue.
* Most data written might never be read, which can be minimized with a TTL.
In write-behind, the application does the following:
* Add/update entry in cache
* Asynchronously write entry to the data store, improving write performance
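A minimal write-behind sketch using only the standard library: writes update the cache immediately and are flushed to the data store by a background worker; `db_write` is a hypothetical persistence call:

```python
import queue
import threading

cache = {}
write_queue = queue.Queue()

def set_user(user_id, values):
    cache[user_id] = values             # cache is updated synchronously
    write_queue.put((user_id, values))  # database write is deferred

def writer():
    while True:
        user_id, values = write_queue.get()
        db_write(user_id, values)       # hypothetical persistence call
        write_queue.task_done()

threading.Thread(target=writer, daemon=True).start()
```

Everything sitting in the queue is exactly the window in which data can be lost, per the first disadvantage below.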
##### Disadvantage(s): write-behind
* There could be data loss if the cache goes down prior to its contents hitting the data store.
* It is more complex to implement write-behind than it is to implement cache-aside or write-through.
#### Refresh-ahead
<palign="center">
<imgsrc="http://i.imgur.com/kxtjqgE.png"/>
<br/>
<i><ahref=http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast>Source: From cache to in-memory data grid</a></i>
</p>
You can configure the cache to automatically refresh any recently accessed cache entry prior to its expiration.
Refresh-ahead can result in reduced latency vs read-through if the cache can accurately predict which items are likely to be needed in the future.
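A minimal refresh-ahead sketch: cached entries carry an expiry time, and a read that lands close to expiry refreshes the entry before it actually expires; `load_from_db` is a hypothetical loader:

```python
import time

TTL = 60             # seconds an entry stays valid
REFRESH_WINDOW = 10  # refresh entries read this close to expiry

cache = {}  # key -> (value, expires_at)

def get(key):
    value, expires_at = cache.get(key, (None, 0))
    now = time.time()
    if value is None or expires_at <= now:
        # Miss or already expired: load synchronously
        value = load_from_db(key)  # hypothetical loader
        cache[key] = (value, now + TTL)
    elif expires_at - now < REFRESH_WINDOW:
        # Recently requested and about to expire: refresh ahead of time,
        # while still serving the currently valid value
        cache[key] = (load_from_db(key), now + TTL)
    return value
```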
##### Disadvantage(s): refresh-ahead
* Not accurately predicting which items are likely to be needed in the future can result in worse performance than without refresh-ahead.
### Disadvantage(s): cache
* Need to maintain consistency between caches and the source of truth such as the database through [cache invalidation](https://en.wikipedia.org/wiki/Cache_algorithms).
* Cache invalidation is a difficult problem; there is additional complexity associated with when to update the cache.
* Need to make application changes such as adding Redis or memcached.
### Source(s) and further reading
* [From cache to in-memory data grid](http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast)
* [Scalable system design patterns](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html)
* [Introduction to architecting systems for scale](http://lethain.com/introduction-to-architecting-systems-for-scale/)
<i><a href=http://lethain.com/introduction-to-architecting-systems-for-scale/#platform_layer>Source: Intro to architecting systems for scale</a></i>
</p>
Asynchronous workflows help reduce request times for expensive operations that would otherwise be performed in-line. They can also help by doing time-consuming work in advance, such as periodic aggregation of data.
### Message queues
Message queues receive, hold, and deliver messages. If an operation is too slow to perform inline, you can use a message queue with the following workflow:
* An application publishes a job to the queue, then notifies the user of job status
* A worker picks up the job from the queue, processes it, then signals the job is complete
The user is not blocked and the job is processed in the background. During this time, the client might optionally do a small amount of processing to make it seem like the task has completed. For example, if posting a tweet, the tweet could be instantly posted to your timeline, but it could take some time before your tweet is actually delivered to all of your followers.
**[Redis](https://redis.io/)** is useful as a simple message broker but messages can be lost.
**[RabbitMQ](https://www.rabbitmq.com/)** is popular but requires you to adapt to the 'AMQP' protocol and manage your own nodes.
**[Amazon SQS](https://aws.amazon.com/sqs/)** is hosted but can have high latency and has the possibility of messages being delivered twice.
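A minimal sketch of the queue workflow above, using Redis (noted above as a simple broker) via the third-party `redis` client; the queue name and job handler are illustrative:

```python
import json
import redis

r = redis.Redis()

def publish_job(job):
    # The application enqueues the job and returns to the user immediately
    r.rpush("jobs", json.dumps(job))

def worker():
    while True:
        # The worker blocks until a job is available, then processes it
        _, raw = r.blpop("jobs")
        process_job(json.loads(raw))  # hypothetical job handler
```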
### Task queues
Task queues receive tasks and their related data, run them, then deliver their results. They can support scheduling and can be used to run computationally-intensive jobs in the background.
**Celery** has support for scheduling and primarily has python support.
### Back pressure
If queues start to grow significantly, the queue size can become larger than memory, resulting in cache misses, disk reads, and even slower performance. [Back pressure](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html) can help by limiting the queue size, thereby maintaining a high throughput rate and good response times for jobs already in the queue. Once the queue fills up, clients get a server busy or HTTP 503 status code to try again later. Clients can retry the request at a later time, perhaps with [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff).
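A minimal back pressure sketch with a bounded standard-library queue: the server rejects jobs once the queue is full (where it would return HTTP 503), and the client retries with exponential backoff. The HTTP layer itself is elided:

```python
import queue
import random
import time

jobs = queue.Queue(maxsize=1000)  # bounded queue limits queued work

def enqueue(job):
    try:
        jobs.put_nowait(job)
        return True   # accepted
    except queue.Full:
        return False  # server would respond with HTTP 503

def submit_with_backoff(job, max_attempts=5):
    for attempt in range(max_attempts):
        if enqueue(job):
            return True
        # Exponential backoff with jitter before retrying
        time.sleep((2 ** attempt) + random.random())
    return False
```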
### Disadvantage(s): asynchronism
* Use cases such as inexpensive calculations and realtime workflows might be better suited for synchronous operations, as introducing queues can add delays and complexity.
### Source(s) and further reading
* [It's all a numbers game](https://www.youtube.com/watch?v=1KRYH75wgy4)
* [Applying back pressure when overloaded](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html)
* [What is the difference between a message queue and a task queue?](https://www.quora.com/What-is-the-difference-between-a-message-queue-and-a-task-queue-Why-would-a-task-queue-require-a-message-broker-like-RabbitMQ-Redis-Celery-or-IronMQ-to-function)
<i><a href=http://www.escotal.com/osilayer.html>Source: OSI 7 layer model</a></i>
</p>
### Hypertext transfer protocol (HTTP)
HTTP is a method for encoding and transporting data between a client and a server. It is a request/response protocol: clients issue requests and servers issue responses with relevant content and completion status info about the request. HTTP is self-contained, allowing requests and responses to flow through many intermediate routers and servers that perform load balancing, caching, encryption, and compression.
A basic HTTP request consists of a verb (method) and a resource (endpoint). Below are common HTTP verbs:
| Verb | Description | Idempotent* | Safe | Cacheable |
|---|---|---|---|---|
| GET | Reads a resource | Yes | Yes | Yes |
| POST | Creates a resource or triggers a process that handles data | No | No | Yes if response contains freshness info |
| PUT | Creates or replaces a resource | Yes | No | No |
| PATCH | Partially updates a resource | No | No | Yes if response contains freshness info |
| DELETE | Deletes a resource | Yes | No | No |
*Can be called many times without different outcomes.
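As an illustration of the verbs above, a sketch using the third-party `requests` library against a hypothetical API:

```python
import requests

base = "https://api.example.com"  # hypothetical API

requests.post(base + "/users", json={"name": "alice"})   # create a resource
requests.get(base + "/users/1")                          # read a resource
requests.put(base + "/users/1", json={"name": "alice"})  # create or replace
requests.patch(base + "/users/1", json={"name": "bob"})  # partial update
requests.delete(base + "/users/1")                       # delete
```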
HTTP is an application layer protocol relying on lower-level protocols such as **TCP** and **UDP**.
#### Source(s) and further reading: HTTP
* [What is HTTP?](https://www.nginx.com/resources/glossary/http/)
* [Difference between HTTP and TCP](https://www.quora.com/What-is-the-difference-between-HTTP-protocol-and-TCP-protocol)
* [Difference between PUT and PATCH](https://laracasts.com/discuss/channels/general-discussion/whats-the-differences-between-put-and-patch?page=1)
### Transmission control protocol (TCP)
<palign="center">
<imgsrc="http://i.imgur.com/JdAsdvG.jpg"/>
<br/>
<i><ahref=http://www.wildbunny.co.uk/blog/2012/10/09/how-to-make-a-multi-player-game-part-1/>Source: How to make a multiplayer game</a></i>
</p>
TCP is a connection-oriented protocol over an [IP network](https://en.wikipedia.org/wiki/Internet_Protocol). Connection is established and terminated using a [handshake](https://en.wikipedia.org/wiki/Handshaking). All packets sent are guaranteed to reach the destination in the original order and without corruption through:
* Sequence numbers and [checksum fields](https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Checksum_computation) for each packet
* [Acknowledgement](https://en.wikipedia.org/wiki/Acknowledgement_(data_networks)) packets and automatic retransmission
If the sender does not receive a correct response, it will resend the packets. If there are multiple timeouts, the connection is dropped. TCP also implements [flow control](https://en.wikipedia.org/wiki/Flow_control_(data)) and [congestion control](https://en.wikipedia.org/wiki/Network_congestion#Congestion_control). These guarantees cause delays and generally result in less efficient transmission than UDP.
To ensure high throughput, web servers can keep a large number of TCP connections open, resulting in high memory usage. It can be expensive to have a large number of open connections between web server threads and say, a [memcached](https://memcached.org/) server. [Connection pooling](https://en.wikipedia.org/wiki/Connection_pool) can help in addition to switching to UDP where applicable.
TCP is useful for applications that require high reliability but are less time critical. Some examples include web servers, database info, SMTP, FTP, and SSH.
Use TCP over UDP when:
* You need all of the data to arrive intact
* You want to automatically make a best estimate use of the network throughput
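A minimal TCP client sketch with the standard library `socket` module, talking to a hypothetical local echo server:

```python
import socket

# SOCK_STREAM selects TCP: connection-oriented, ordered, reliable
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(("127.0.0.1", 9000))  # the handshake establishes the connection
    s.sendall(b"hello")             # delivery and ordering are guaranteed
    reply = s.recv(1024)            # read up to 1024 bytes of the response
```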
### User datagram protocol (UDP)
<palign="center">
<imgsrc="http://i.imgur.com/yzDrJtA.jpg"/>
<br/>
<i><ahref=http://www.wildbunny.co.uk/blog/2012/10/09/how-to-make-a-multi-player-game-part-1/>Source: How to make a multiplayer game</a></i>
</p>
UDP is connectionless. Datagrams (analogous to packets) are guaranteed only at the datagram level. Datagrams might reach their destination out of order or not at all. UDP does not support congestion control. Without the guarantees that TCP supports, UDP is generally more efficient.
UDP can broadcast, sending datagrams to all devices on the subnet. This is useful with [DHCP](https://en.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol) because the client has not yet received an IP address, thus preventing a way for TCP to stream without the IP address.
UDP is less reliable but works well in real time use cases such as VoIP, video chat, streaming, and realtime multiplayer games.
Use UDP over TCP when:
* You need the lowest latency
* Late data is worse than loss of data
* You want to implement your own error correction
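The UDP counterpart with the standard library: no connection, no handshake, and no guarantee the datagram or its reply ever arrives (the server address is hypothetical):

```python
import socket

# SOCK_DGRAM selects UDP: connectionless, unordered, best-effort delivery
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.sendto(b"hello", ("127.0.0.1", 9000))  # fire and forget, no handshake
data, addr = s.recvfrom(1024)            # blocks; the reply may never arrive
```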
#### Source(s) and further reading: TCP and UDP
* [Networking for game programming](http://gafferongames.com/networking-for-game-programmers/udp-vs-tcp/)
* [Key differences between TCP and UDP protocols](http://www.cyberciti.biz/faq/key-differences-between-tcp-and-udp-protocols/)
* [Difference between TCP and UDP](http://stackoverflow.com/questions/5970383/difference-between-tcp-and-udp)
* [Transmission control protocol](https://en.wikipedia.org/wiki/Transmission_Control_Protocol)
* [Scaling memcache at Facebook](http://www.cs.bu.edu/~jappavoo/jappavoo.github.com/451/papers/memcache-fb.pdf)
### Remote procedure call (RPC)
<palign="center">
<imgsrc="http://i.imgur.com/iF4Mkb5.png"/>
<br/>
<i><ahref=http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview>Source: Crack the system design interview</a></i>
</p>
In an RPC, a client causes a procedure to execute on a different address space, usually a remote server. The procedure is coded as if it were a local procedure call, abstracting away the details of how to communicate with the server from the client program. Remote calls are usually slower and less reliable than local calls so it is helpful to distinguish RPC calls from local calls. Popular RPC frameworks include [Protobuf](https://developers.google.com/protocol-buffers/), [Thrift](https://thrift.apache.org/), and [Avro](https://avro.apache.org/docs/current/).
RPC is a request-response protocol:
* **Client program** - Calls the client stub procedure. The parameters are pushed onto the stack like a local procedure call.
* **Client stub procedure** - Marshals (packs) procedure id and arguments into a request message.
* **Client communication module** - OS sends the message from the client to the server.
* **Server communication module** - OS passes the incoming packets to the server stub procedure.
* **Server stub procedure** - Unmarshals the request, calls the server procedure matching the procedure id, and passes the given arguments.
* The server response repeats the steps above in reverse order.
Sample RPC calls:
```
GET /someoperation?data=anId
POST /anotheroperation
{
"data":"anId";
"anotherdata": "another value"
}
```
RPC is focused on exposing behaviors. RPCs are often used for performance reasons with internal communications, as you can hand-craft native calls to better fit your use cases.
Choose a native library (aka SDK) when:
* You know your target platform.
* You want to control how your "logic" is accessed.
* You want to control how error control happens off your library.
* Performance and end user experience is your primary concern.
HTTP APIs following **REST** tend to be used more often for public APIs.
#### Disadvantage(s): RPC
* RPC clients become tightly coupled to the service implementation.
* A new API must be defined for every new operation or use case.
* It can be difficult to debug RPC.
* You might not be able to leverage existing technologies out of the box. For example, it might require additional effort to ensure [RPC calls are properly cached](http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-rest/) on caching servers such as [Squid](http://www.squid-cache.org/).
### Representational state transfer (REST)
REST is an architectural style enforcing a client/server model where the client acts on a set of resources managed by the server. The server provides a representation of resources and actions that can either manipulate or get a new representation of resources. All communication must be stateless and cacheable.
There are four qualities of a RESTful interface:
* **Identify resources (URI in HTTP)** - use the same URI regardless of any operation.
* **Change with representations (Verbs in HTTP)** - use verbs, headers, and body.
* **Self-descriptive error message (status response in HTTP)** - Use status codes, don't reinvent the wheel.
* **[HATEOAS](http://restcookbook.com/Basics/hateoas/) (HTML interface for HTTP)** - your web service should be fully accessible in a browser.
Sample REST calls:
```
GET /someresources/anId
PUT /someresources/anId
{"anotherdata": "another value"}
```
REST is focused on exposing data. It minimizes the coupling between client/server and is often used for public HTTP APIs. REST uses a more generic and uniform method of exposing resources through URIs, [representation through headers](https://github.com/for-GET/know-your-http-well/blob/master/headers.md), and actions through verbs such as GET, POST, PUT, DELETE, and PATCH. Being stateless, REST is great for horizontal scaling and partitioning.
#### Disadvantage(s): REST
* With REST being focused on exposing data, it might not be a good fit if resources are not naturally organized or accessed in a simple hierarchy. For example, returning all updated records from the past hour matching a particular set of events is not easily expressed as a path. With REST, it is likely to be implemented with a combination of URI path, query parameters, and possibly the request body.
* REST typically relies on a few verbs (GET, POST, PUT, DELETE, and PATCH) which sometimes doesn't fit your use case. For example, moving expired documents to the archive folder might not cleanly fit within these verbs.
* Fetching complicated resources with nested hierarchies requires multiple round trips between the client and server to render single views, e.g. fetching content of a blog entry and the comments on that entry. For mobile applications operating in variable network conditions, these multiple roundtrips are highly undesirable.
* Over time, more fields might be added to an API response and older clients will receive all new data fields, even those that they do not need; as a result, payload sizes bloat, leading to larger latencies.
* [Why REST for internal use and not RPC](http://arstechnica.com/civis/viewtopic.php?t=1190508)
## Security
This section could use some updates. Consider [contributing](#contributing)!
Security is a broad topic. Unless you have considerable experience, a security background, or are applying for a position that requires knowledge of security, you probably won't need to know more than the basics:
* Encrypt in transit and at rest.
* Sanitize all user inputs or any input parameters exposed to user to prevent [XSS](https://en.wikipedia.org/wiki/Cross-site_scripting) and [SQL injection](https://en.wikipedia.org/wiki/SQL_injection).
* Use parameterized queries to prevent SQL injection (see the sketch below).
* Use the principle of [least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege).
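A small sqlite3 (standard library) example of the parameterized query point above: user input is bound by the driver rather than spliced into the SQL string, so it can never alter the query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

name = "alice'; DROP TABLE users; --"  # hostile input

# Unsafe: string formatting splices the input into the statement
# conn.execute("SELECT * FROM users WHERE name = '%s'" % name)

# Safe: the value is bound as a parameter, never parsed as SQL
rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```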
* [Security guide for developers](https://github.com/FallibleInc/security-guide-for-developers)
* [OWASP top ten](https://www.owasp.org/index.php/OWASP_Top_Ten_Cheat_Sheet)
## Appendix
You'll sometimes be asked to do 'back-of-the-envelope' estimates. For example, you might need to determine how long it will take to generate 100 image thumbnails from disk or how much memory a data structure will take. The **Powers of two table** and **Latency numbers every programmer should know** are handy references.
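For instance, a rough version of the thumbnail estimate, using approximate figures in the spirit of the latency tables linked below (assuming ~30 KB thumbnails, each costing one disk seek plus a sequential read; all numbers are illustrative, not measured):

```python
disk_seek = 10e-3           # ~10 ms per disk seek
read_1mb_from_disk = 30e-3  # ~30 ms to read 1 MB sequentially from disk
thumbnail_mb = 30 / 1024.0  # assume a ~30 KB thumbnail

per_image = disk_seek + thumbnail_mb * read_1mb_from_disk
total = 100 * per_image     # roughly 1 second for 100 thumbnails
print("{0:.2f} s".format(total))
```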
* [Latency numbers every programmer should know - 1](https://gist.github.com/jboner/2841832)
* [Latency numbers every programmer should know - 2](https://gist.github.com/hellerbarde/2843375)
* [Designs, lessons, and advice from building large distributed systems](http://www.cs.cornell.edu/projects/ladis2009/talks/dean-keynote-ladis2009.pdf)
* [Software Engineering Advice from Building Large-Scale Distributed Systems](https://static.googleusercontent.com/media/research.google.com/en//people/jeff/stanford-295-talk.pdf)
> Common system design interview questions, with links to resources on how to solve each.
| Question | Reference(s) |
|---|---|
| Design a file sync service like Dropbox | [youtube.com](https://www.youtube.com/watch?v=PE4gwstWhmc) |
| Design a search engine like Google | [queue.acm.org](http://queue.acm.org/detail.cfm?id=988407)<br/>[stackexchange.com](http://programmers.stackexchange.com/questions/38324/interview-question-how-would-you-implement-google-search)<br/>[ardendertat.com](http://www.ardendertat.com/2012/01/11/implementing-search-engines/)<br/>[stanford.edu](http://infolab.stanford.edu/~backrub/google.html) |
| Design a scalable web crawler like Google | [quora.com](https://www.quora.com/How-can-I-build-a-web-crawler-from-scratch) |
| Design Google docs | [code.google.com](https://code.google.com/p/google-mobwrite/)<br/>[neil.fraser.name](https://neil.fraser.name/writing/sync/) |
| Design a key-value store like Redis | [slideshare.net](http://www.slideshare.net/dvirsky/introduction-to-redis) |
| Design a cache system like Memcached | [slideshare.net](http://www.slideshare.net/oemebamo/introduction-to-memcached) |
| Design a recommendation system like Amazon's | [hulu.com](https://web.archive.org/web/20170406065247/http://tech.hulu.com/blog/2011/09/19/recommendation-system.html)<br/>[ijcai13.org](http://ijcai13.org/files/tutorial_slides/td3.pdf) |
| Design a tinyurl system like Bitly | [n00tc0d3r.blogspot.com](http://n00tc0d3r.blogspot.com/) |
| Design a chat app like WhatsApp | [highscalability.com](http://highscalability.com/blog/2014/2/26/the-whatsapp-architecture-facebook-bought-for-19-billion.html) |
| Design a picture sharing system like Instagram | [highscalability.com](http://highscalability.com/flickr-architecture)<br/>[highscalability.com](http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html) |
| Design the Facebook news feed function | [quora.com](http://www.quora.com/What-are-best-practices-for-building-something-like-a-News-Feed)<br/>[quora.com](http://www.quora.com/Activity-Streams/What-are-the-scaling-issues-to-keep-in-mind-while-developing-a-social-network-feed)<br/>[slideshare.net](http://www.slideshare.net/danmckinley/etsy-activity-feeds-architecture) |
| Design the Facebook timeline function | [facebook.com](https://www.facebook.com/note.php?note_id=10150468255628920)<br/>[highscalability.com](http://highscalability.com/blog/2012/1/23/facebook-timeline-brought-to-you-by-the-power-of-denormaliza.html) |
| Design the Facebook chat function | [erlang-factory.com](http://www.erlang-factory.com/upload/presentations/31/EugeneLetuchy-ErlangatFacebook.pdf)<br/>[facebook.com](https://www.facebook.com/note.php?note_id=14218138919&id=9445547199&index=0) |
| Design a graph search function like Facebook's | [facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-building-out-the-infrastructure-for-graph-search/10151347573598920)<br/>[facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-indexing-and-ranking-in-graph-search/10151361720763920)<br/>[facebook.com](https://www.facebook.com/notes/facebook-engineering/under-the-hood-the-natural-language-interface-of-graph-search/10151432733048920) |
| Design a content delivery network like CloudFlare | [figshare.com](https://figshare.com/articles/Globally_distributed_content_delivery/6605972) |
| Design a trending topic system like Twitter's | [michael-noll.com](http://www.michael-noll.com/blog/2013/01/18/implementing-real-time-trending-topics-in-storm/)<br/>[snikolov.wordpress.com](http://snikolov.wordpress.com/2012/11/14/early-detection-of-twitter-trends/) |
| Design a random ID generation system | [blog.twitter.com](https://blog.twitter.com/2010/announcing-snowflake)<br/>[github.com](https://github.com/twitter/snowflake/) |
| Return the top k requests during a time interval | [cs.ucsb.edu](https://www.cs.ucsb.edu/sites/cs.ucsb.edu/files/docs/reports/2005-23.pdf)<br/>[wpi.edu](http://davis.wpi.edu/xmdv/docs/EDBT11-diyang.pdf) |
| Design a system that serves data from multiple data centers | [highscalability.com](http://highscalability.com/blog/2009/8/24/how-google-serves-data-from-multiple-datacenters.html) |
| Design an online multiplayer card game | [indieflashblog.com](http://www.indieflashblog.com/how-to-create-an-asynchronous-multiplayer-game.html)<br/>[buildnewgames.com](http://buildnewgames.com/real-time-multiplayer/) |
| Design a garbage collection system | [stuffwithstuff.com](http://journal.stuffwithstuff.com/2013/12/08/babys-first-garbage-collector/)<br/>[washington.edu](http://courses.cs.washington.edu/courses/csep521/07wi/prj/rick.pdf) |
| Design an API rate limiter | [https://stripe.com/blog/](https://stripe.com/blog/rate-limiters) |
| Add a system design question | [Contribute](#contributing) |
> Articles on how real world systems are designed.
<palign="center">
<imgsrc="http://i.imgur.com/TcUo2fw.png"/>
<br/>
<i><ahref=https://www.infoq.com/presentations/Twitter-Timeline-Scalability>Source: Twitter timelines at scale</a></i>
</p>
**Don't focus on nitty gritty details for the following articles, instead:**
* Identify shared principles, common technologies, and patterns within these articles
* Study what problems are solved by each component, where it works, where it doesn't
* Review the lessons learned
|Type | System | Reference(s) |
|---|---|---|
| Data processing | **MapReduce** - Distributed data processing from Google | [research.google.com](http://static.googleusercontent.com/media/research.google.com/zh-CN/us/archive/mapreduce-osdi04.pdf) |
| Data processing | **Spark** - Distributed data processing from Databricks | [slideshare.net](http://www.slideshare.net/AGrishchenko/apache-spark-architecture) |
| Data processing | **Storm** - Distributed data processing from Twitter | [slideshare.net](http://www.slideshare.net/previa/storm-16094009) |
| | | |
| Data store | **Bigtable** - Distributed column-oriented database from Google | [harvard.edu](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/chang06bigtable.pdf) |
| Data store | **HBase** - Open source implementation of Bigtable | [slideshare.net](http://www.slideshare.net/alexbaranau/intro-to-hbase) |
| Data store | **Cassandra** - Distributed column-oriented database from Facebook | [slideshare.net](http://www.slideshare.net/planetcassandra/cassandra-introduction-features-30103666) |
| Data store | **DynamoDB** - Document-oriented database from Amazon | [harvard.edu](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/decandia07dynamo.pdf) |
| Data store | **MongoDB** - Document-oriented database | [slideshare.net](http://www.slideshare.net/mdirolf/introduction-to-mongodb) |
| Data store | **Spanner** - Globally-distributed database from Google | [research.google.com](http://research.google.com/archive/spanner-osdi2012.pdf) |
| Data store | **Memcached** - Distributed memory caching system | [slideshare.net](http://www.slideshare.net/oemebamo/introduction-to-memcached) |
| Data store | **Redis** - Distributed memory caching system with persistence and value types | [slideshare.net](http://www.slideshare.net/dvirsky/introduction-to-redis) |
| | | |
| File system | **Google File System (GFS)** - Distributed file system | [research.google.com](http://static.googleusercontent.com/media/research.google.com/zh-CN/us/archive/gfs-sosp2003.pdf) |
| File system | **Hadoop File System (HDFS)** - Open source implementation of GFS | [apache.org](http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) |
| | | |
| Misc | **Chubby** - Lock service for loosely-coupled distributed systems from Google | [research.google.com](http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/chubby-osdi06.pdf) |
| Company | Reference(s) |
|---|---|
| ESPN | [Operating At 100,000 duh nuh nuhs per second](http://highscalability.com/blog/2013/11/4/espns-architecture-at-scale-operating-at-100000-duh-nuh-nuhs.html) |
| Google | [Google architecture](http://highscalability.com/google-architecture) |
| Instagram | [14 million users, terabytes of photos](http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html)<br/>[What powers Instagram](http://instagram-engineering.tumblr.com/post/13649370142/what-powers-instagram-hundreds-of-instances) |
| Justin.tv | [Justin.Tv's live video broadcasting architecture](http://highscalability.com/blog/2010/3/16/justintvs-live-video-broadcasting-architecture.html) |
| Facebook | [Scaling memcached at Facebook](https://cs.uwaterloo.ca/~brecht/courses/854-Emerging-2014/readings/key-value/fb-memcached-nsdi-2013.pdf)<br/>[TAO: Facebook’s distributed data store for the social graph](https://cs.uwaterloo.ca/~brecht/courses/854-Emerging-2014/readings/data-store/tao-facebook-distributed-datastore-atc-2013.pdf)<br/>[Facebook’s photo storage](https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Beaver.pdf)<br/>[How Facebook Live Streams To 800,000 Simultaneous Viewers](http://highscalability.com/blog/2016/6/27/how-facebook-live-streams-to-800000-simultaneous-viewers.html) |
| Mailbox | [From 0 to one million users in 6 weeks](http://highscalability.com/blog/2013/6/18/scaling-mailbox-from-0-to-one-million-users-in-6-weeks-and-1.html) |
| Netflix | [A 360 Degree View Of The Entire Netflix Stack](http://highscalability.com/blog/2015/11/9/a-360-degree-view-of-the-entire-netflix-stack.html)<br/>[Netflix: What Happens When You Press Play?](http://highscalability.com/blog/2017/12/11/netflix-what-happens-when-you-press-play.html) |
| Pinterest | [From 0 To 10s of billions of page views a month](http://highscalability.com/blog/2013/4/15/scaling-pinterest-from-0-to-10s-of-billions-of-page-views-a.html)<br/>[18 million visitors, 10x growth, 12 employees](http://highscalability.com/blog/2012/5/21/pinterest-architecture-update-18-million-visitors-10x-growth.html) |
| Playfish | [50 million monthly users and growing](http://highscalability.com/blog/2010/9/21/playfishs-social-gaming-architecture-50-million-monthly-user.html) |
| Tumblr | [15 billion page views a month](http://highscalability.com/blog/2012/2/13/tumblr-architecture-15-billion-page-views-a-month-and-harder.html) |
| Twitter | [Making Twitter 10000 percent faster](http://highscalability.com/scaling-twitter-making-twitter-10000-percent-faster)<br/>[Storing 250 million tweets a day using MySQL](http://highscalability.com/blog/2011/12/19/how-twitter-stores-250-million-tweets-a-day-using-mysql.html)<br/>[150M active users, 300K QPS, a 22 MB/S firehose](http://highscalability.com/blog/2013/7/8/the-architecture-twitter-uses-to-deal-with-150m-active-users.html)<br/>[Timelines at scale](https://www.infoq.com/presentations/Twitter-Timeline-Scalability)<br/>[Big and small data at Twitter](https://www.youtube.com/watch?v=5cKTP36HVgI)<br/>[Operations at Twitter: scaling beyond 100 million users](https://www.youtube.com/watch?v=z8LU0Cj6BOU)<br/>[How Twitter Handles 3,000 Images Per Second](http://highscalability.com/blog/2016/4/20/how-twitter-handles-3000-images-per-second.html) |
| Uber | [How Uber scales their real-time market platform](http://highscalability.com/blog/2015/9/14/how-uber-scales-their-real-time-market-platform.html)<br/>[Lessons Learned From Scaling Uber To 2000 Engineers, 1000 Services, And 8000 Git Repositories](http://highscalability.com/blog/2016/10/12/lessons-learned-from-scaling-uber-to-2000-engineers-1000-ser.html) |
* [A distributed systems reading list](http://dancres.github.io/Pages/)
* [Cracking the system design interview](http://www.puncsky.com/blog/2016-02-13-crack-the-system-design-interview)
## Contact info
Feel free to contact me to discuss any issues, questions, or comments.
My contact info can be found on my [GitHub page](https://github.com/donnemartin).
## License
*I am providing code and resources in this repository to you under an open source license. Because this is my personal repository, the license you receive to my code and resources is from me and not my employer (Facebook).*
Copyright 2017 Donne Martin
Creative Commons Attribution 4.0 International License (CC BY 4.0)