- Original article: github.com/donnemartin/system-design-primer
- Translation by: Juejin Translation Project (掘金翻译计划)
- Translator:
- Proofreader:
- Use this link to check whether this translation differs from the English version (if you do not see changes to README.md, this translation is up to date).
The System Design Primer
Motivation
Learn how to design large-scale systems.
Prep for the system design interview.
Learn how to design large-scale systems
Learning how to design scalable systems will help you become a better engineer.
System design is a broad topic. There is a vast amount of resources scattered throughout the web on system design principles.
This repo is an organized collection of resources to help you learn how to build systems at scale.
Learn from the open source community
This is an early version of a continually updated, open source project.
Contributions are welcome!
Prep for the system design interview
In addition to coding interviews, system design is a required component of the technical interview process at many tech companies.
Practice common system design interview questions and compare your results with sample solutions: discussions, code, and diagrams.
Additional topics for interview prep:
Flashcards
The provided flashcard decks use spaced repetition to help you retain key system design concepts.
Great for use on the go.
Contributing
Learn from the community.
Feel free to submit pull requests to help:
- Fix errors
- Improve sections
- Add new sections
Content that needs some polishing is placed under development.
Review the contributing guidelines.
Translations
Interested in translating? Please see this link.
Index of system design topics
Summaries of various system design topics, including pros and cons. Everything is a trade-off.
Each section contains links to more in-depth resources.
- System design topics: start here
- Performance vs scalability
- Latency vs throughput
- Availability vs consistency
- Consistency patterns
- Availability patterns
- Domain name system
- Content delivery network
- Load balancer
- Reverse proxy (web server)
- Application layer
- Database
- Cache
- Asynchronism
- Communication
- Security
- Appendix
- Under development
- Credits
- Contact info
- License
Study guide
Suggested topics to review based on your interview timeline (short, medium, long).
Q: For interviews, do I need to know everything here?
A: No, you don't need to know everything here to prepare for the interview.
What you are asked in an interview depends on variables such as:
- How much experience you have
- What your technical background is
- What positions you are interviewing for
- Which companies you are interviewing with
- Luck
More experienced candidates are generally expected to know more about system design. Architects or team leads might be expected to know more than individual contributors. Top tech companies are likely to have one or more design interview rounds.
Start broad and go deeper in a few areas. It helps to know a little about various key system design topics. Adjust the following guide based on your timeline, experience, what positions you are interviewing for, and which companies you are interviewing with.
- Short timeline - Aim for breadth with system design topics. Practice by solving some interview questions.
- Medium timeline - Aim for breadth and some depth with system design topics. Practice by solving many interview questions.
- Long timeline - Aim for breadth and more depth with system design topics. Practice by solving most interview questions.
| | Short timeline | Medium timeline | Long timeline |
|---|---|---|---|
| Read through the System design topics to get a broad understanding of how systems work | 👍 | 👍 | 👍 |
| Read through a few articles in the Company engineering blogs for the companies you are interviewing with | 👍 | 👍 | 👍 |
| Read through a few Real world architectures | 👍 | 👍 | 👍 |
| Review How to approach a system design interview question | 👍 | 👍 | 👍 |
| Work through System design interview questions with solutions | Some | Many | Most |
| Work through Object-oriented design interview questions with solutions | Some | Many | Most |
| Review Additional system design interview questions | Some | Many | Most |
How to approach a system design interview question
How to tackle a system design interview question.
The system design interview is an open-ended conversation. You are expected to lead it.
You can use the following steps to guide the discussion. To help solidify this process, work through the System design interview questions with solutions section using the steps below.
Step 1: Outline use cases, constraints, and assumptions
Gather requirements and scope the problem. Ask questions to clarify use cases and constraints. Discuss assumptions.
- Who is going to use it?
- How are they going to use it?
- How many users are there?
- What does the system do?
- What are the inputs and outputs of the system?
- How much data do we expect to handle?
- How many requests per second do we expect?
- What is the expected read to write ratio?
Step 2: Create a high level design
Outline a high level design with all important components.
- Sketch the main components and connections
- Justify your ideas
Step 3: Design core components
Dive into details for each core component. For example, if you were asked to design a url shortening service, discuss:
- Generating and storing a hash of the full url (see the sketch after this list)
- Translating a hashed url to the full url
- Database lookup
- API and object-oriented design
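As a quick illustration of the first two bullets, here is a minimal sketch of one way to derive a short code from a full url. The choice of MD5 plus Base62 and the in-memory url_to_code mapping are assumptions for illustration only, not the repo's reference solution.

import hashlib

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
url_to_code = {}  # in-memory stand-in for the datastore mapping short code -> full url

def base62_encode(num):
    """Encode a non-negative integer using the Base62 alphabet above."""
    if num == 0:
        return ALPHABET[0]
    digits = []
    while num > 0:
        num, rem = divmod(num, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

def shorten(url, length=7):
    """Hash the full url with MD5, then Base62-encode a slice of the digest."""
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    code = base62_encode(int(digest, 16))[:length]
    url_to_code[code] = url
    return code

code = shorten("https://example.com/some/very/long/path")
full_url = url_to_code[code]  # lookup: translate the short code back to the full url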
Step 4: Scale the design
Identify and address bottlenecks, given the constraints. For example, do you need the following to address scalability issues?
- Load balancer
- Horizontal scaling
- Caching
- Database sharding
Discuss potential solutions and trade-offs. Everything is a trade-off. Address bottlenecks using principles of scalable system design.
Back-of-the-envelope calculations
You might be asked to do some estimates by hand. Refer to the Appendix for the following resources:
Source(s) and further reading
Check out the following links to get a better idea of what to expect:
System design interview questions with solutions
Common system design interview questions with sample discussions, code, and diagrams.
Solutions linked to content in the
solutions/
folder.
Question | |
---|---|
Design Pastebin.com (or Bit.ly) | Solution |
Design the Twitter timeline and search (or Facebook feed and search) | Solution |
Design a web crawler | Solution |
Design Mint.com | Solution |
Design the data structures for a social network | Solution |
Design a key-value store for a search engine | Solution |
Design Amazon's sales ranking by category feature | Solution |
Design a system that scales to millions of users on AWS | Solution |
Add a system design question | Contribute |
Object-oriented design interview questions with solutions
Common object-oriented design interview questions with sample discussions, code, and diagrams.
Solutions linked to content in the
solutions/
folder.
Note: this section is under development
Question | |
---|---|
Design a hash map | Solution |
Design a least recently used cache | Solution |
Design a call center | Solution |
Design a deck of cards | Solution |
Design a parking lot | Solution |
Design a chat server | Solution |
Design a circular array | Contribute |
Add an object-oriented design question | Contribute |
System design topics: start here
New to system design?
First, you'll need a basic understanding of common principles, learning about what they are, how they are used, and their pros and cons.
Step 1: Review the scalability video lecture
- Topics covered:
- Vertical scaling
- Horizontal scaling
- Caching
- Load balancing
- Database replication
- Database partitioning
Step 2: Review the scalability article
Next steps
Next, we'll look at high-level trade-offs:
- Performance vs scalability
- Latency vs throughput
- Availability vs consistency
Keep in mind that everything is a trade-off.
Then we'll dive into more specific topics such as DNS, CDNs, and load balancers.
Performance vs scalability
A service is scalable if it results in increased performance in a manner proportional to resources added. Generally, increasing performance means serving more units of work, but it can also be to handle larger units of work, such as when datasets grow.1
Another way to look at performance vs scalability:
- If you have a performance problem, your system is slow for a single user.
- If you have a scalability problem, your system is fast for a single user but slow under heavy load.
Source(s) and further reading
Latency vs throughput
Latency is the time to perform some action or to produce some result.
Throughput is the number of such actions or results per unit of time.
Generally, you should aim for maximal throughput with acceptable latency.
Source(s) and further reading
Availability vs consistency
CAP theorem
In a distributed computer system, you can only support two of the following guarantees:
- Consistency - Every read receives the most recent write or an error
- Availability - Every request receives a response, without guarantee that it contains the most recent version of the information
- Partition tolerance - The system continues to operate despite arbitrary partitioning due to network failures
Networks aren't reliable, so you'll need to support partition tolerance, and make a software trade-off between consistency and availability.
CP - consistency and partition tolerance
Waiting for a response from the partitioned node might result in a timeout error. CP is a good choice if your business needs require atomic reads and writes.
AP - availability and partition tolerance
Responses return the most readily available version of the data, which might not be the latest. Writes might take some time to propagate when the partition is resolved.
AP is a good choice if the business needs allow for eventual consistency, or when the system needs to continue working despite external errors.
Source(s) and further reading
Consistency patterns
With multiple copies of the same data, we are faced with options on how to synchronize them so clients have a consistent view of the data. Recall the definition of consistency from the CAP theorem - every read receives the most recent write or an error.
Weak consistency
After a write, reads may or may not see it. A best effort approach is taken.
This approach is seen in systems such as memcached. Weak consistency works well in real time use cases such as VoIP, video chat, and realtime multiplayer games. For example, if you are on a phone call and lose reception for a few seconds, when you regain connection you do not hear what was spoken during connection loss.
Eventual consistency
After a write, reads will eventually see it (typically within milliseconds). Data is replicated asynchronously.
This approach is seen in systems such as DNS and email. Eventual consistency works well in highly available systems.
Strong consistency
After a write, reads will see it. Data is replicated synchronously.
This approach is seen in file systems and RDBMSes. Strong consistency works well in systems that need transactions.
Source(s) and further reading
Availability patterns
There are two main patterns to support high availability: fail-over and replication.
Fail-over
Active-passive
With active-passive fail-over, heartbeats are sent between the active server and the passive server on standby. If the heartbeat is interrupted, the passive server takes over the active server's IP address and resumes service.
The length of downtime is determined by whether the passive server is already running in 'hot' standby or whether it needs to start up from 'cold' standby. Only the active server handles traffic.
Active-passive fail-over can also be referred to as master-slave fail-over.
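To make the heartbeat idea concrete, here is a minimal sketch: the passive node watches a timestamped heartbeat and calls a hypothetical promote() hook when it goes silent. Real fail-over systems also deal with fencing and split-brain, which are omitted here.

import time

HEARTBEAT_TIMEOUT = 3.0  # seconds of silence before the passive server takes over
last_heartbeat = time.monotonic()

def on_heartbeat():
    """Called whenever the active server's periodic heartbeat message arrives."""
    global last_heartbeat
    last_heartbeat = time.monotonic()

def promote():
    """Hypothetical hook: take over the active server's IP address and start serving."""
    print("passive server promoted to active")

def monitor(poll_interval=0.5):
    """Passive server loop: fail over once the heartbeat goes silent."""
    while True:
        if time.monotonic() - last_heartbeat > HEARTBEAT_TIMEOUT:
            promote()
            return
        time.sleep(poll_interval)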
Active-active
In active-active, both servers are managing traffic, spreading the load between them.
If the servers are public-facing, the DNS would need to know about the public IPs of both servers. If the servers are internal-facing, application logic would need to know about both servers.
Active-active fail-over can also be referred to as master-master fail-over.
Disadvantage(s): fail-over
- Fail-over adds more hardware and additional complexity.
- There is a potential for loss of data if the active system fails before any newly written data can be replicated to the passive.
Replication
Master-slave and master-master
This topic is further discussed in the Database section:
Domain name system
A Domain Name System (DNS) translates a domain name such as www.example.com to an IP address.
DNS is hierarchical, with a few authoritative servers at the top level. Your router or ISP provides information about which DNS server(s) to contact when doing a lookup. Lower level DNS servers cache mappings, which could become stale due to DNS propagation delays. DNS results can also be cached by your browser or OS for a certain period of time, determined by the time to live (TTL).
- NS record (name server) - Specifies the DNS servers for your domain/subdomain.
- MX record (mail exchange) - Specifies the mail servers for accepting messages.
- A record (address) - Points a name to an IP address.
- CNAME (canonical) - Points a name to another name or CNAME (example.com to www.example.com) or to an A record.
Services such as CloudFlare and Route 53 provide managed DNS. Some DNS services can route traffic through various methods:
- Weighted round robin
- Prevent traffic from going to servers under maintenance
- Balance between varying cluster sizes
- A/B testing
- Latency-based routing
- Geolocation-based routing
Disadvantage(s): DNS
- Accessing a DNS server introduces a slight delay, although mitigated by caching described above.
- DNS server management could be complex and is generally managed by governments, ISPs, and large companies.
- DNS services have recently come under DDoS attack, preventing users from accessing websites such as Twitter without knowing Twitter's IP address(es).
Source(s) and further reading
Content delivery network
A content delivery network (CDN) is a globally distributed network of proxy servers, serving content from locations closer to the user. Generally, static files such as HTML/CSS/JS, photos, and videos are served from a CDN, although some CDNs such as Amazon's CloudFront support dynamic content. The site's DNS resolution will tell clients which server to contact.
Serving content from CDNs can significantly improve performance in two ways:
- Users receive content from data centers close to them
- Your servers do not have to serve requests that the CDN fulfills
Push CDNs
Push CDNs receive new content whenever changes occur on your server. You take full responsibility for providing content, uploading directly to the CDN and rewriting URLs to point to the CDN. You can configure when content expires and when it is updated. Content is uploaded only when it is new or changed, minimizing traffic but maximizing storage.
Pull CDNs
Pull CDNs grab new content from your server when the first user requests the content. You leave the content on your server and rewrite URLs to point to the CDN. This results in a slower request until the content is cached on the CDN.
A time-to-live (TTL) determines how long content is cached. Pull CDNs minimize storage space on the CDN, but can create redundant traffic if files expire and are pulled before they have actually changed.
Sites with heavy traffic work well with pull CDNs, as traffic is spread out more evenly with only recently-requested content remaining on the CDN.
Disadvantage(s): CDN
- CDN costs could be significant depending on traffic, although this should be weighed against the costs you would incur not using a CDN.
- Content might be stale if it is updated before the TTL expires it.
- CDNs require changing URLs for static content to point to the CDN.
Source(s) and further reading
Load balancer
Source: Scalable system design patterns
Load balancers distribute incoming client requests to computing resources such as application servers and databases. In each case, the load balancer returns the response from the computing resource to the appropriate client. Load balancers are effective at:
- Preventing requests from going to unhealthy servers
- Preventing overloading resources
- Helping to eliminate a single point of failure
Load balancers can be implemented with hardware (expensive) or with software such as HAProxy. Additional benefits include:
- SSL termination - Decrypt incoming requests and encrypt server responses so backend servers do not have to perform these potentially expensive operations.
- Removes the need to install X.509 certificates on each server.
- Session persistence - Issue cookies and route a specific client's requests to the same instance if the web apps do not keep track of sessions.
To protect against failures, it's common to set up multiple load balancers, either in active-passive or active-active mode.
Load balancers can route traffic based on various metrics, including:
- Random
- Least loaded
- Session/cookies
- Round robin or weighted round robin
- Layer 4
- Layer 7
Layer 4 load balancing
Layer 4 load balancers look at info at the transport layer to decide how to distribute requests. Generally, this involves the source and destination IP addresses and ports in the header, but not the contents of the packet. Layer 4 load balancers forward network packets to and from the upstream server, performing Network Address Translation (NAT).
Layer 7 load balancing
Layer 7 load balancers look at the application layer to decide how to distribute requests. This can involve contents of the header, message, and cookies. Layer 7 load balancers terminate network traffic, read the message, make a load-balancing decision, then open a connection to the selected server. For example, a layer 7 load balancer can direct video traffic to servers that host videos while directing more sensitive user billing traffic to security-hardened servers.
At the cost of flexibility, layer 4 load balancing requires less time and computing resources than layer 7, although the performance impact can be minimal on modern commodity hardware.
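As a rough illustration of the round robin and weighted round robin strategies listed above, here is a minimal sketch; the backend addresses and weights are made-up values, and a real load balancer would also track health checks and connection counts.

import itertools
import random

BACKENDS = [("10.0.0.1", 5), ("10.0.0.2", 3), ("10.0.0.3", 1)]  # (host, weight), made-up values
round_robin = itertools.cycle([host for host, _ in BACKENDS])

def pick_round_robin():
    """Plain round robin: cycle through the backends in order."""
    return next(round_robin)

def pick_weighted():
    """Weighted round robin, approximated here with a weighted random choice."""
    hosts = [host for host, _ in BACKENDS]
    weights = [weight for _, weight in BACKENDS]
    return random.choices(hosts, weights=weights, k=1)[0]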
Horizontal scaling
Load balancers can also help with horizontal scaling, improving performance and availability. Scaling out using commodity machines is more cost efficient and results in higher availability than scaling up a single server on more expensive hardware, called vertical scaling. It is also easier to hire for talent working on commodity hardware than it is for specialized enterprise systems.
Disadvantage(s): horizontal scaling
- Scaling horizontally introduces complexity and involves cloning servers.
- Servers should be stateless: they should not contain any user-related data like sessions or profile pictures.
- Sessions can be stored in a centralized data store such as a database or a persistent cache (Redis, Memcached).
- Downstream servers such as caches and databases need to handle more simultaneous connections as upstream servers scale out.
Disadvantage(s): load balancer
- The load balancer can become a performance bottleneck if it does not have enough resources or if it is not configured properly.
- Introducing a load balancer to help eliminate a single point of failure results in increased complexity.
- A single load balancer is a single point of failure; configuring multiple load balancers further increases complexity.
Source(s) and further reading
- NGINX architecture
- HAProxy architecture guide
- Scalability
- Wikipedia
- Layer 4 load balancing
- Layer 7 load balancing
- ELB listener config
Reverse proxy (web server)
A reverse proxy is a web server that centralizes internal services and provides unified interfaces to the public. Requests from clients are forwarded to a server that can fulfill them before the reverse proxy returns the server's response to the client.
Additional benefits include:
- Increased security - Hide information about backend servers, blacklist IPs, limit number of connections per client
- Increased scalability and flexibility - Clients only see the reverse proxy's IP, allowing you to scale servers or change their configuration
- SSL termination - Decrypt incoming requests and encrypt server responses so backend servers do not have to perform these potentially expensive operations
- Removes the need to install X.509 certificates on each server
- Compression - Compress server responses
- Caching - Return the response for cached requests
- Static content - Serve static content directly
- HTML/CSS/JS
- Photos
- Videos
- Etc
Load balancer vs reverse proxy
- Deploying a load balancer is useful when you have multiple servers. Often, load balancers route traffic to a set of servers serving the same function.
- Reverse proxies can be useful even with just one web server or application server, opening up the benefits described in the previous section.
- Solutions such as NGINX and HAProxy can support both layer 7 reverse proxying and load balancing.
Disadvantage(s): reverse proxy
- Introducing a reverse proxy results in increased complexity.
- A single reverse proxy is a single point of failure; configuring multiple reverse proxies (i.e. a failover) further increases complexity.
Source(s) and further reading
Application layer
Source: Intro to architecting systems for scale
Separating out the web layer from the application layer (also known as platform layer) allows you to scale and configure both layers independently. Adding a new API results in adding application servers without necessarily adding additional web servers.
The single responsibility principle advocates for small and autonomous services that work together. Small teams with small services can plan more aggressively for rapid growth.
Workers in the application layer also help enable asynchronism.
Microservices
Related to this discussion are microservices, which can be described as a suite of independently deployable, small, modular services. Each service runs a unique process and communicates through a well-defined, lightweight mechanism to serve a business goal. 1
Pinterest, for example, could have the following microservices: user profile, follower, feed, search, photo upload, etc.
Service Discovery
Systems such as Zookeeper can help services find each other by keeping track of registered names, addresses, ports, etc.
Disadvantage(s): application layer
- Adding an application layer with loosely coupled services requires a different approach from an architectural, operations, and process viewpoint (vs a monolithic system).
- Microservices can add complexity in terms of deployments and operations.
Source(s) and further reading
- Intro to architecting systems for scale
- Crack the system design interview
- Service oriented architecture
- Introduction to Zookeeper
- Here's what you need to know about building microservices
Database
Source: Scaling up to your first 10 million users
Relational database management system (RDBMS)
A relational database like SQL is a collection of data items organized in tables.
ACID is a set of properties of relational database transactions.
- Atomicity - Each transaction is all or nothing
- Consistency - Any transaction will bring the database from one valid state to another
- Isolation - Executing transactions concurrently has the same results as if the transactions were executed serially
- Durability - Once a transaction has been committed, it will remain so
There are many techniques to scale a relational database: master-slave replication, master-master replication, federation, sharding, denormalization, and SQL tuning.
Master-slave replication
The master serves reads and writes, replicating writes to one or more slaves, which serve only reads. Slaves can also replicate to additional slaves in a tree-like fashion. If the master goes offline, the system can continue to operate in read-only mode until a slave is promoted to a master or a new master is provisioned.
Source: Scalability, availability, stability, patterns
Disadvantage(s): master-slave replication
- Additional logic is needed to promote a slave to a master.
- See Disadvantage(s): replication for points related to both master-slave and master-master.
Master-master replication
Both masters serve reads and writes and coordinate with each other on writes. If either master goes down, the system can continue to operate with both reads and writes.
Source: Scalability, availability, stability, patterns
Disadvantage(s): master-master replication
- You'll need a load balancer or you'll need to make changes to your application logic to determine where to write.
- Most master-master systems are either loosely consistent (violating ACID) or have increased write latency due to synchronization.
- Conflict resolution comes more into play as more write nodes are added and as latency increases.
- See Disadvantage(s): replication for points related to both master-slave and master-master.
Disadvantage(s): replication
- There is a potential for loss of data if the master fails before any newly written data can be replicated to other nodes.
- Writes are replayed to the read replicas. If there are a lot of writes, the read replicas can get bogged down with replaying writes and can't do as many reads.
- The more read slaves, the more you have to replicate, which leads to greater replication lag.
- On some systems, writing to the master can spawn multiple threads to write in parallel, whereas read replicas only support writing sequentially with a single thread.
- Replication adds more hardware and additional complexity.
Source(s) and further reading: replication
Federation
Source: Scaling up to your first 10 million users
Federation (or functional partitioning) splits up databases by function. For example, instead of a single, monolithic database, you could have three databases: forums, users, and products, resulting in less read and write traffic to each database and therefore less replication lag. Smaller databases result in more data that can fit in memory, which in turn results in more cache hits due to improved cache locality. With no single central master serializing writes you can write in parallel, increasing throughput.
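As a small illustration of functional partitioning, the routing logic can be as simple as a lookup from function to database connection; the connection strings and get_database helper below are hypothetical stand-ins.

DATABASES = {  # hypothetical connection strings, one database per function
    "forums": "postgresql://db-forums/forums",
    "users": "postgresql://db-users/users",
    "products": "postgresql://db-products/products",
}

def get_database(function_name):
    """Return the connection string for the database serving this function."""
    return DATABASES[function_name]

users_db = get_database("users")        # application code picks a database by function
products_db = get_database("products")  # instead of hitting one monolithic database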
Disadvantage(s): federation
- Federation is not effective if your schema requires huge functions or tables.
- You'll need to update your application logic to determine which database to read and write.
- Joining data from two databases is more complex with a server link.
- Federation adds more hardware and additional complexity.
Source(s) and further reading: federation
Sharding
Source: Scalability, availability, stability, patterns
Sharding distributes data across different databases such that each database can only manage a subset of the data. Taking a users database as an example, as the number of users increases, more shards are added to the cluster.
Similar to the advantages of federation, sharding results in less read and write traffic, less replication, and more cache hits. Index size is also reduced, which generally improves performance with faster queries. If one shard goes down, the other shards are still operational, although you'll want to add some form of replication to avoid data loss. Like federation, there is no single central master serializing writes, allowing you to write in parallel with increased throughput.
Common ways to shard a table of users are either through the user's last name initial or the user's geographic location.
Disadvantage(s): sharding
- You'll need to update your application logic to work with shards, which could result in complex SQL queries.
- Data distribution can become lopsided in a shard. For example, a set of power users on a shard could result in increased load to that shard compared to others.
- Rebalancing adds additional complexity. A sharding function based on consistent hashing can reduce the amount of transferred data (see the sketch after this list).
- Joining data from multiple shards is more complex.
- Sharding adds more hardware and additional complexity.
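A minimal sketch of the consistent hashing idea mentioned above: shards are placed on a hash ring and each key maps to the next shard clockwise, so adding or removing a shard only remaps keys between neighboring nodes. The shard names and the 100 virtual nodes per shard are illustrative assumptions.

import bisect
import hashlib

def _hash(value):
    return int(hashlib.md5(value.encode("utf-8")).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes, vnodes=100):
        # Place vnodes replicas of each shard on the ring to smooth the key distribution
        self._ring = sorted((_hash("%s-%d" % (node, i)), node)
                            for node in nodes for i in range(vnodes))
        self._keys = [h for h, _ in self._ring]

    def get_node(self, key):
        # Walk clockwise to the first shard at or after the key's position on the ring
        idx = bisect.bisect(self._keys, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["shard-a", "shard-b", "shard-c"])
shard = ring.get_node("user:1234")  # adding or removing a shard only moves nearby keys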
Source(s) and further reading: sharding
Denormalization
Denormalization attempts to improve read performance at the expense of some write performance. Redundant copies of the data are written in multiple tables to avoid expensive joins. Some RDBMS such as PostgreSQL and Oracle support materialized views which handle the work of storing redundant information and keeping redundant copies consistent.
Once data becomes distributed with techniques such as federation and sharding, managing joins across data centers further increases complexity. Denormalization might circumvent the need for such complex joins.
In most systems, reads can heavily outnumber writes 100:1 or even 1000:1. A read resulting in a complex database join can be very expensive, spending a significant amount of time on disk operations.
Disadvantage(s): denormalization
- Data is duplicated.
- Constraints can help redundant copies of information stay in sync, which increases complexity of the database design.
- A denormalized database under heavy write load might perform worse than its normalized counterpart.
Source(s) and further reading: denormalization
SQL tuning
SQL tuning is a broad topic and many books have been written as reference.
It's important to benchmark and profile to simulate and uncover bottlenecks.
- Benchmark - Simulate high-load situations with tools such as ab.
- Profile - Enable tools such as the slow query log to help track performance issues.
Benchmarking and profiling might point you to the following optimizations.
Tighten up the schema
- MySQL dumps to disk in contiguous blocks for fast access.
- Use `CHAR` instead of `VARCHAR` for fixed-length fields. `CHAR` effectively allows for fast, random access, whereas with `VARCHAR`, you must find the end of a string before moving onto the next one.
- Use `TEXT` for large blocks of text such as blog posts. `TEXT` also allows for boolean searches. Using a `TEXT` field results in storing a pointer on disk that is used to locate the text block.
- Use `INT` for larger numbers up to 2^32 or 4 billion.
- Use `DECIMAL` for currency to avoid floating point representation errors.
- Avoid storing large `BLOBS`, store the location of where to get the object instead.
- `VARCHAR(255)` is the largest number of characters that can be counted in an 8 bit number, often maximizing the use of a byte in some RDBMS.
- Set the `NOT NULL` constraint where applicable to improve search performance.
Use good indices
- Columns that you are querying (`SELECT`, `GROUP BY`, `ORDER BY`, `JOIN`) could be faster with indices.
- Indices are usually represented as self-balancing B-trees that keep data sorted and allow searches, sequential access, insertions, and deletions in logarithmic time.
- Placing an index can keep the data in memory, requiring more space.
- Writes could also be slower since the index also needs to be updated.
- When loading large amounts of data, it might be faster to disable indices, load the data, then rebuild the indices.
Avoid expensive joins
- Denormalize where performance demands it.
Partition tables
- Break up a table by putting hot spots in a separate table to help keep it in memory.
Tune the query cache
- In some cases, the query cache could lead to performance issues.
Source(s) and further reading: SQL tuning
- Tips for optimizing MySQL queries
- Is there a good reason i see VARCHAR(255) used so often?
- How do null values affect performance?
- Slow query log
NoSQL
NoSQL is a collection of data items represented in a key-value store, document-store, wide column store, or a graph database. Data is denormalized, and joins are generally done in the application code. Most NoSQL stores lack true ACID transactions and favor eventual consistency.
BASE is often used to describe the properties of NoSQL databases. In comparison with the CAP Theorem, BASE chooses availability over consistency.
- Basically available - the system guarantees availability.
- Soft state - the state of the system may change over time, even without input.
- Eventual consistency - the system will become consistent over a period of time, given that the system doesn't receive input during that period.
In addition to choosing between SQL or NoSQL, it is helpful to understand which type of NoSQL database best fits your use case(s). We'll review key-value stores, document-stores, wide column stores, and graph databases in the next section.
Key-value store
Abstraction: hash table
A key-value store generally allows for O(1) reads and writes and is often backed by memory or SSD. Data stores can maintain keys in lexicographic order, allowing efficient retrieval of key ranges. Key-value stores can allow for storing of metadata with a value.
Key-value stores provide high performance and are often used for simple data models or for rapidly-changing data, such as an in-memory cache layer. Since they offer only a limited set of operations, complexity is shifted to the application layer if additional operations are needed.
A key-value store is the basis for more complex systems such as a document store, and in some cases, a graph database.
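As a toy illustration of the O(1) read/write abstraction, here is a dict-backed key-value store with an optional per-key TTL; this only sketches the interface, it is not how production stores such as Redis are implemented.

import time

class KeyValueStore:
    """Toy dict-backed key-value store with optional per-key TTL."""

    def __init__(self):
        self._data = {}     # key -> value
        self._expires = {}  # key -> absolute expiry time

    def set(self, key, value, ttl=None):
        # O(1) average-case write; optionally expire the key after ttl seconds
        self._data[key] = value
        if ttl is not None:
            self._expires[key] = time.monotonic() + ttl

    def get(self, key):
        # O(1) average-case read; expired keys are treated as missing
        expiry = self._expires.get(key)
        if expiry is not None and time.monotonic() > expiry:
            self._data.pop(key, None)
            self._expires.pop(key, None)
            return None
        return self._data.get(key)

store = KeyValueStore()
store.set("user:123", '{"name": "alice"}', ttl=60)
user = store.get("user:123")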
Source(s) and further reading: key-value store
Document store
Abstraction: key-value store with documents stored as values
A document store is centered around documents (XML, JSON, binary, etc), where a document stores all information for a given object. Document stores provide APIs or a query language to query based on the internal structure of the document itself. Note, many key-value stores include features for working with a value's metadata, blurring the lines between these two storage types.
Based on the underlying implementation, documents are organized in either collections, tags, metadata, or directories. Although documents can be organized or grouped together, documents may have fields that are completely different from each other.
Some document stores like MongoDB and CouchDB also provide a SQL-like language to perform complex queries. DynamoDB supports both key-values and documents.
Document stores provide high flexibility and are often used for working with occasionally changing data.
Source(s) and further reading: document store
Wide column store
Source: SQL & NoSQL, a brief history
Abstraction: nested map
ColumnFamily<RowKey, Columns<ColKey, Value, Timestamp>>
A wide column store's basic unit of data is a column (name/value pair). A column can be grouped in column families (analogous to a SQL table). Super column families further group column families. You can access each column independently with a row key, and columns with the same row key form a row. Each value contains a timestamp for versioning and for conflict resolution.
Google introduced Bigtable as the first wide column store, which influenced the open-source HBase often-used in the Hadoop ecosystem, and Cassandra from Facebook. Stores such as BigTable, HBase, and Cassandra maintain keys in lexicographic order, allowing efficient retrieval of selective key ranges.
Wide column stores offer high availability and high scalability. They are often used for very large data sets.
Source(s) and further reading: wide column store
Graph database
Abstraction: graph
In a graph database, each node is a record and each arc is a relationship between two nodes. Graph databases are optimized to represent complex relationships with many foreign keys or many-to-many relationships.
Graph databases offer high performance for data models with complex relationships, such as a social network. They are relatively new and are not yet widely-used; it might be more difficult to find development tools and resources. Many graphs can only be accessed with REST APIs.
Source(s) and further reading: graph
Source(s) and further reading: NoSQL
SQL or NoSQL
Source: Transitioning from RDBMS to NoSQL
Reasons for SQL:
- Structured data
- Strict schema
- Relational data
- Need for complex joins
- Transactions
- Clear patterns for scaling
- More established: developers, community, code, tools, etc
- Lookups by index are very fast
Reasons for NoSQL:
- Semi-structured data
- Dynamic or flexible schema
- Non-relational data
- No need for complex joins
- Store many TB (or PB) of data
- Very data intensive workload
- Very high throughput for IOPS
Sample data well-suited for NoSQL:
- Rapid ingest of clickstream and log data
- Leaderboard or scoring data
- Temporary data, such as a shopping cart
- Frequently accessed ('hot') tables
- Metadata/lookup tables
Source(s) and further reading: SQL or NoSQL
Cache
Source: Scalable system design patterns
Caching improves page load times and can reduce the load on your servers and databases. In this model, the dispatcher will first look up whether the request has been made before and try to find the previous result to return, in order to save the actual execution.
Databases often benefit from a uniform distribution of reads and writes across their partitions. Popular items can skew the distribution, causing bottlenecks. Putting a cache in front of a database can help absorb uneven loads and spikes in traffic.
Client caching
Caches can be located on the client side (OS or browser), on the server side, or in a distinct cache layer.
CDN caching
CDNs are considered a type of cache.
Web server caching
Reverse proxies and caches such as Varnish can serve static and dynamic content directly. Web servers can also cache requests, returning responses without having to contact application servers.
Database caching
Your database usually includes some level of caching in a default configuration, optimized for a generic use case. Tweaking these settings for specific usage patterns can further boost performance.
Application caching
In-memory caches such as Memcached and Redis are key-value stores between your application and your data storage. Since the data is held in RAM, it is much faster than typical databases where data is stored on disk. RAM is more limited than disk, so cache invalidation algorithms such as least recently used (LRU) can help keep 'hot' data in RAM and invalidate 'cold' entries.
Redis has the following additional features:
- Persistence options
- Built-in data structures such as sorted sets and lists
There are multiple levels you can cache that fall under two general categories: database queries and objects:
- Row level
- Query level
- Fully-formed serializable objects
- Fully-rendered HTML
Generally, you should try to avoid file-based caching, as it makes cloning and auto-scaling more difficult.
Caching at the database query level
Whenever you query the database, hash the query as a key and store the result in the cache. This approach suffers from expiration issues:
- Hard to delete a cached result with complex queries.
- If one piece of data changes, such as a table cell, you need to delete all cached queries that might include the changed cell.
Caching at the object level
See your data as an object, similar to what you do with your application code. Have your application assemble the dataset from the database into a class instance or a data structure(s):
- Remove the object from cache if its underlying data has changed.
- Allows for asynchronous processing: workers assemble objects by consuming the latest cached object.
Suggestions of what to cache:
- User sessions
- Fully rendered web pages
- Activity streams
- User graph data
When to update the cache
Since you can only store a limited amount of data in cache, you'll need to determine which cache update strategy works best for your use case.
Cache-aside
Source: From cache to in-memory data grid
The application is responsible for reading and writing from storage. The cache does not interact with storage directly. The application does the following:
- Look for the entry in the cache, resulting in a cache miss
- Load the entry from the database
- Add the entry to the cache
- Return the entry
def get_user(self, user_id):
    user = cache.get("user.{0}".format(user_id))
    if user is None:
        user = db.query("SELECT * FROM users WHERE user_id = {0}", user_id)
        if user is not None:
            key = "user.{0}".format(user_id)
            cache.set(key, json.dumps(user))
    return user
Memcached is generally used in this manner.
Subsequent reads of data added to cache are fast. Cache-aside is also referred to as lazy loading. Only requested data is cached, which avoids filling up the cache with data that isn't requested.
Disadvantage(s): cache-aside
- Each cache miss results in three trips, which can cause a noticeable delay.
- Data can become stale if it is updated in the database. This issue is mitigated by setting a time-to-live (TTL) which forces an update of the cache entry, or by using write-through.
- When a node fails, it is replaced by a new, empty node, increasing latency.
Write-through
Source: Scalability, availability, stability, patterns
The application uses the cache as the main data store, reading and writing data to it, while the cache is responsible for reading and writing to the database:
- Application adds/updates the entry in the cache
- Cache synchronously writes the entry to the data store
- Return
Application code:
set_user(12345, {"foo":"bar"})
Cache code:
def set_user(user_id, values):
    user = db.query("UPDATE Users WHERE id = {0}", user_id, values)
    cache.set(user_id, user)
Write-through is a slow overall operation due to the write operation, but subsequent reads of just-written data are fast. Users are generally more tolerant of latency when updating data than reading data. Data in the cache is not stale.
Disadvantage(s): write-through
- When a new node is created due to failure or scaling, the new node will not cache entries until the entry is updated in the database. Cache-aside in conjunction with write-through can mitigate this issue.
- Most data written might never be read, which can be minimized with a TTL.
Write-behind (write-back)
Source: Scalability, availability, stability, patterns
In write-behind, the application does the following:
- Add/update the entry in the cache
- Asynchronously write the entry to the data store, improving write performance (a sketch follows below)
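A minimal sketch of the write-behind flow, assuming a background worker drains a queue into the database; the cache dict, the queue, and the db_save callback are stand-ins rather than a specific library.

import json
import queue
import threading

cache = {}                   # stand-in for Memcached/Redis
write_queue = queue.Queue()  # pending writes destined for the data store

def set_user(user_id, values):
    cache[user_id] = values                         # 1. add/update the entry in the cache
    write_queue.put((user_id, json.dumps(values)))  # 2. enqueue the write for async persistence

def write_behind_worker(db_save):
    """Background thread draining the queue into the data store."""
    while True:
        user_id, payload = write_queue.get()
        db_save(user_id, payload)
        write_queue.task_done()

def _noop_db_save(user_id, payload):
    pass  # replace with a real database write

threading.Thread(target=write_behind_worker, args=(_noop_db_save,), daemon=True).start()
set_user(12345, {"foo": "bar"})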
Disadvantage(s): write-behind
- There could be data loss if the cache goes down prior to its contents hitting the data store.
- It is more complex to implement write-behind than it is to implement cache-aside or write-through.
Refresh-ahead
Source: From cache to in-memory data grid
You can configure the cache to automatically refresh any recently accessed cache entry prior to its expiration.
Refresh-ahead can result in reduced latency vs read-through if the cache can accurately predict which items are likely to be needed in the future.
Disadvantage(s): refresh-ahead
- Not accurately predicting which items are likely to be needed in the future can result in reduced performance compared to not using refresh-ahead.
Disadvantage(s): cache
- Need to maintain consistency between caches and the source of truth such as the database through cache invalidation.
- Need to make application changes such as adding Redis or memcached.
- Cache invalidation is a difficult problem; there is additional complexity associated with when to update the cache.
Source(s) and further reading
Asynchronism
Source: Intro to architecting systems for scale
Asynchronous workflows help reduce request times for expensive operations that would otherwise be performed in-line. They can also help by doing time-consuming work in advance, such as periodic aggregation of data.
Message queues
Message queues receive, hold, and deliver messages. If an operation is too slow to perform inline, you can use a message queue with the following workflow:
- An application publishes a job to the queue, then notifies the user of the job status
- A worker picks up the job from the queue, processes it, then signals that the job is complete
The user is not blocked and the job is processed in the background. During this time, the client might optionally do a small amount of processing to make it seem like the task has completed. For example, if posting a tweet, the tweet could be instantly posted to your timeline, but it could take some time before your tweet is actually delivered to all of your followers.
Redis is useful as a simple message broker but messages can be lost.
RabbitMQ is popular but requires you to adapt to the AMQP protocol and manage your own nodes.
Amazon SQS is hosted but can have high latency and has the possibility of messages being delivered twice.
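To make the message queue workflow above concrete, here is a minimal sketch using Redis lists as the simple broker mentioned above, via the redis-py client; the queue name and job fields are made up, and as noted, messages can be lost with this approach.

import json
import redis  # redis-py client, used here as a simple broker

r = redis.Redis(host="localhost", port=6379)

def publish_tweet_job(tweet_id, text):
    """Application: enqueue the fan-out job, then respond to the user right away."""
    r.lpush("fanout_jobs", json.dumps({"tweet_id": tweet_id, "text": text}))

def worker_loop():
    """Worker: block until a job arrives, process it, then repeat."""
    while True:
        _, payload = r.brpop("fanout_jobs")
        job = json.loads(payload)
        fan_out_to_followers(job)  # hypothetical slow background work

def fan_out_to_followers(job):
    pass  # e.g. push the tweet onto each follower's timeline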
Task queues
Task queues receive tasks and their related data, run them, then deliver their results. They can support scheduling and can be used to run computationally-intensive jobs in the background.
Celery has support for scheduling and is primarily written in Python.
Back pressure
If queues start to grow significantly, the queue size can become larger than memory, resulting in cache misses, disk reads, and even slower performance. Back pressure can help by limiting the queue size, thereby maintaining a high throughput rate and good response times for jobs already in the queue. Once the queue fills up, clients get a server busy or HTTP 503 status code to try again later. Clients can retry the request at a later time, perhaps with exponential backoff.
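As a sketch of the retry-with-exponential-backoff idea mentioned above, assuming the Python requests library and a hypothetical endpoint:

import random
import time
import requests

def post_with_backoff(url, payload, max_retries=5):
    """Retry on HTTP 503 (server busy), roughly doubling the wait each attempt with jitter."""
    for attempt in range(max_retries):
        response = requests.post(url, json=payload)
        if response.status_code != 503:
            return response
        time.sleep((2 ** attempt) + random.random())
    return response

post_with_backoff("https://api.example.com/jobs", {"task": "resize_image"})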
Disadvantage(s): asynchronism
- Use cases such as inexpensive calculations and realtime workflows might be better suited for synchronous operations, as introducing queues can add delays and complexity.
Source(s) and further reading
Communication
Hypertext transfer protocol (HTTP)
HTTP is a method for encoding and transporting data between a client and a server. It is a request/response protocol: clients issue requests and servers issue responses with relevant content and completion status info about the request. HTTP is self-contained, allowing requests and responses to flow through many intermediate routers and servers that perform load balancing, caching, encryption, and compression.
A basic HTTP request consists of a verb (method) and a resource (endpoint). Below are common HTTP verbs:
Verb | Description | *Idempotent | Safe | Cacheable |
---|---|---|---|---|
GET | Reads a resource | Yes | Yes | Yes |
POST | Creates a resource or triggers a process that handles data | No | No | Yes if response contains freshness info |
PUT | Creates or replaces a resource | Yes | No | No |
PATCH | Partially updates a resource | No | No | Yes if response contains freshness info |
DELETE | Deletes a resource | Yes | No | No |
*Can be called many times without different outcomes.
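A quick illustration of the verbs in the table above using the Python requests library; the endpoint and payloads are placeholders.

import requests

BASE = "https://api.example.com"  # placeholder endpoint

requests.get(BASE + "/users/123")                        # read a resource: idempotent, safe, cacheable
requests.post(BASE + "/users", json={"name": "alice"})   # create a resource: not idempotent
requests.put(BASE + "/users/123", json={"name": "bob"})  # create or replace: idempotent
requests.delete(BASE + "/users/123")                     # delete: idempotent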
HTTP is an application layer protocol relying on lower-level protocols such as TCP and UDP.
Transmission control protocol (TCP)
Source: How to make a multiplayer game
TCP is a connection-oriented protocol over an IP network. Connection is established and terminated using a handshake. All packets sent are guaranteed to reach the destination in the original order and without corruption through:
- Sequence numbers and checksum fields for each packet
- Acknowledgement packets and automatic retransmission
If the sender does not receive a correct response, it will resend the packets. If there are multiple timeouts, the connection is dropped. TCP also implements flow control and congestion control. These guarantees cause delays and generally result in less efficient transmission than UDP.
To ensure high throughput, web servers can keep a large number of TCP connections open, resulting in high memory usage. It can be expensive to have a large number of open connections between web server threads and, say, a memcached server. Connection pooling can help, in addition to switching to UDP where applicable.
TCP is useful for applications that require high reliability but are less time critical. Some examples include web servers, database info, SMTP, FTP, and SSH.
Use TCP over UDP when:
- You need all of the data to arrive intact
- You want to automatically make a best estimate use of the network throughput
User datagram protocol (UDP)
Source: How to make a multiplayer game
UDP is connectionless. Datagrams (analogous to packets) are guaranteed only at the datagram level. Datagrams might reach their destination out of order or not at all. UDP does not support congestion control. Without the guarantees that TCP supports, UDP is generally more efficient.
UDP can broadcast, sending datagrams to all devices on the subnet. This is useful with DHCP because the client has not yet received an IP address, thus preventing a way for TCP to stream without the IP address.
UDP is less reliable but works well in real time use cases such as VoIP, video chat, streaming, and realtime multiplayer games.
Use UDP over TCP when:
- You need the lowest latency
- Late data is worse than loss of data
- You want to implement your own error correction
Source(s) and further reading: TCP and UDP
- Networking for game programming
- Key differences between TCP and UDP protocols
- Difference between TCP and UDP
- Transmission control protocol
- User datagram protocol
- Scaling memcache at Facebook
Remote procedure call (RPC)
Source: Crack the system design interview
In an RPC, a client causes a procedure to execute on a different address space, usually a remote server. The procedure is coded as if it were a local procedure call, abstracting away the details of how to communicate with the server from the client program. Remote calls are usually slower and less reliable than local calls so it is helpful to distinguish RPC calls from local calls. Popular RPC frameworks include Protobuf, Thrift, and Avro.
RPC is a request-response protocol:
- Client program - Calls the client stub procedure. The parameters are pushed onto the stack like a local procedure call.
- Client stub procedure - Marshals (packs) procedure id and arguments into a request message.
- Client communication module - OS sends the message from the client to the server.
- Server communication module - OS passes the incoming packets to the server stub procedure.
- Server stub procedure - Unmarshalls the results, calls the server procedure matching the procedure id and passes the given arguments.
- The server response repeats the steps above in reverse order.
Sample RPC calls:
GET /someoperation?data=anId
POST /anotheroperation
{
"data":"anId";
"anotherdata": "another value"
}
RPC is focused on exposing behaviors. RPCs are often used for performance reasons with internal communications, as you can hand-craft native calls to better fit your use cases.
Choose a Native Library aka SDK when:
- You know your target platform.
- You want to control how your "logic" is accessed
- You want to control how error control happens off your library
- Performance and end user experience is your primary concern
HTTP APIs following REST tend to be used more often for public APIs.
Disadvantage(s): RPC
- RPC clients become tightly coupled to the service implementation.
- A new API must be defined for every new operation or use case.
- It can be difficult to debug RPC.
- You might not be able to leverage existing technologies out of the box. For example, it might require additional effort to ensure RPC calls are properly cached on caching servers such as Squid.
Representational state transfer (REST)
REST is an architectural style enforcing a client/server model where the client acts on a set of resources managed by the server. The server provides a representation of resources and actions that can either manipulate or get a new representation of resources. All communication must be stateless and cacheable.
There are four qualities of a RESTful interface:
- Identify resources (URI in HTTP) - use the same URI regardless of any operation.
- Change with representations (Verbs in HTTP) - use verbs, headers, and body.
- Self-descriptive error message (status response in HTTP) - Use status codes, don't reinvent the wheel.
- HATEOAS (HTML interface for HTTP) - your web service should be fully accessible in a browser.
Sample REST calls:
GET /someresources/anId
PUT /someresources/anId
{"anotherdata": "another value"}
REST is focused on exposing data. It minimizes the coupling between client/server and is often used for public HTTP APIs. REST uses a more generic and uniform method of exposing resources through URIs, representation through headers, and actions through verbs such as GET, POST, PUT, DELETE, and PATCH. Being stateless, REST is great for horizontal scaling and partitioning.
Disadvantage(s): REST
- With REST being focused on exposing data, it might not be a good fit if resources are not naturally organized or accessed in a simple hierarchy. For example, returning all updated records from the past hour matching a particular set of events is not easily expressed as a path. With REST, it is likely to be implemented with a combination of URI path, query parameters, and possibly the request body.
- REST typically relies on a few verbs (GET, POST, PUT, DELETE, and PATCH) which sometimes doesn't fit your use case. For example, moving expired documents to the archive folder might not cleanly fit within these verbs.
- Fetching complicated resources with nested hierarchies requires multiple round trips between the client and server to render single views, e.g. fetching content of a blog entry and the comments on that entry. For mobile applications operating in variable network conditions, these multiple roundtrips are highly undesirable.
- Over time, more fields might be added to an API response and older clients will receive all new data fields, even those that they do not need; as a result, it bloats the payload size and leads to larger latencies.
RPC and REST calls comparison
Operation | RPC | REST |
---|---|---|
Signup | POST /signup | POST /persons |
Resign | POST /resign {"personid": "1234"} | DELETE /persons/1234 |
Read a person | GET /readPerson?personid=1234 | GET /persons/1234 |
Read a person's items list | GET /readUsersItemsList?personid=1234 | GET /persons/1234/items |
Add an item to a person's items | POST /addItemToUsersItemsList {"personid": "1234"; "itemid": "456"} | POST /persons/1234/items {"itemid": "456"} |
Update an item | POST /modifyItem {"itemid": "456"; "key": "value"} | PUT /items/456 {"key": "value"} |
Delete an item | POST /removeItem {"itemid": "456"} | DELETE /items/456 |
Source: Do you really know why you prefer REST over RPC
Source(s) and further reading: REST and RPC
- Do you really know why you prefer REST over RPC
- When are RPC-ish approaches more appropriate than REST?
- REST vs JSON-RPC
- Debunking the myths of RPC and REST
- What are the drawbacks of using REST
- Crack the system design interview
- Thrift
- Why REST for internal use and not RPC
Security
This section could use some updates. Consider contributing!
Security is a broad topic. Unless you have considerable experience, a security background, or are applying for a position that requires knowledge of security, you probably won't need to know more than the basics:
- Encrypt in transit and at rest.
- Sanitize all user inputs or any input parameters exposed to user to prevent XSS and SQL injection.
- Use parameterized queries to prevent SQL injection.
- Use the principle of least privilege.
Source(s) and further reading
Appendix
You'll sometimes be asked to do 'back-of-the-envelope' estimates. For example, you might need to determine how long it will take to generate 100 image thumbnails from disk or how much memory a data structure will take. The Powers of two table and Latency numbers every programmer should know are handy references.
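For instance, a rough pass at the thumbnail example, assuming ~30 KB per thumbnail and ignoring resize/CPU cost, using the sequential-read and seek figures from the latency numbers further below:

thumbnail_kb = 30    # assumed average thumbnail size
count = 100
disk_mb_per_s = 30   # sequential disk read throughput, from the latency numbers below
seek_ms = 10         # disk seek time, from the latency numbers below

sequential_s = (thumbnail_kb * count / 1000.0) / disk_mb_per_s  # ~0.1 s if read back-to-back
random_s = count * seek_ms / 1000.0                             # ~1 s if each read needs a seek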
Powers of two table
Power Exact Value Approx Value Bytes
---------------------------------------------------------------
7 128
8 256
10 1024 1 thousand 1 KB
16 65,536 64 KB
20 1,048,576 1 million 1 MB
30 1,073,741,824 1 billion 1 GB
32 4,294,967,296 4 GB
40 1,099,511,627,776 1 trillion 1 TB
Source(s) and further reading
Latency numbers every programmer should know
Latency Comparison Numbers
--------------------------
L1 cache reference 0.5 ns
Branch mispredict 5 ns
L2 cache reference 7 ns 14x L1 cache
Mutex lock/unlock 100 ns
Main memory reference 100 ns 20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy 10,000 ns 10 us
Send 1 KB bytes over 1 Gbps network 10,000 ns 10 us
Read 4 KB randomly from SSD* 150,000 ns 150 us ~1GB/sec SSD
Read 1 MB sequentially from memory 250,000 ns 250 us
Round trip within same datacenter 500,000 ns 500 us
Read 1 MB sequentially from SSD* 1,000,000 ns 1,000 us 1 ms ~1GB/sec SSD, 4X memory
Disk seek 10,000,000 ns 10,000 us 10 ms 20x datacenter roundtrip
Read 1 MB sequentially from 1 Gbps 10,000,000 ns 10,000 us 10 ms 40x memory, 10X SSD
Read 1 MB sequentially from disk 30,000,000 ns 30,000 us 30 ms 120x memory, 30X SSD
Send packet CA->Netherlands->CA 150,000,000 ns 150,000 us 150 ms
Notes
-----
1 ns = 10^-9 seconds
1 us = 10^-6 seconds = 1,000 ns
1 ms = 10^-3 seconds = 1,000 us = 1,000,000 ns
Handy metrics based on numbers above:
- Read sequentially from disk at 30 MB/s
- Read sequentially from 1 Gbps Ethernet at 100 MB/s
- Read sequentially from SSD at 1 GB/s
- Read sequentially from main memory at 4 GB/s
- 6-7 world-wide round trips per second
- 2,000 round trips per second within a data center
Latency numbers visualized
Source(s) and further reading
- Latency numbers every programmer should know - 1
- Latency numbers every programmer should know - 2
- Designs, lessons, and advice from building large distributed systems
- Software Engineering Advice from Building Large-Scale Distributed Systems
Additional system design interview questions
Common system design interview questions, with links to resources on how to solve each.
Question | Reference(s) |
---|---|
Design a file sync service like Dropbox | youtube.com |
Design a search engine like Google | queue.acm.org stackexchange.com ardendertat.com stanford.edu |
Design a scalable web crawler like Google | quora.com |
Design Google docs | code.google.com neil.fraser.name |
Design a key-value store like Redis | slideshare.net |
Design a cache system like Memcached | slideshare.net |
Design a recommendation system like Amazon's | hulu.com ijcai13.org |
Design a tinyurl system like Bitly | n00tc0d3r.blogspot.com |
Design a chat app like WhatsApp | highscalability.com |
Design a picture sharing system like Instagram | highscalability.com highscalability.com |
Design the Facebook news feed function | quora.com quora.com slideshare.net |
Design the Facebook timeline function | facebook.com highscalability.com |
Design the Facebook chat function | erlang-factory.com facebook.com |
Design a graph search function like Facebook's | facebook.com facebook.com facebook.com |
Design a content delivery network like CloudFlare | cmu.edu |
Design a trending topic system like Twitter's | michael-noll.com snikolov .wordpress.com |
Design a random ID generation system | blog.twitter.com github.com |
Return the top k requests during a time interval | ucsb.edu wpi.edu |
Design a system that serves data from multiple data centers | highscalability.com |
Design an online multiplayer card game | indieflashblog.com buildnewgames.com |
Design a garbage collection system | stuffwithstuff.com washington.edu |
Add a system design question | Contribute |
Real world architectures
Articles on how real world systems are designed.
Source: Twitter timelines at scale
Don't focus on nitty gritty details for the following articles, instead:
- Identify shared principles, common technologies, and patterns within these articles
- Study what problems are solved by each component, where it works, where it doesn't
- Review the lessons learned
Type | System | Reference(s) |
---|---|---|
Data processing | MapReduce - Distributed data processing from Google | research.google.com |
Data processing | Spark - Distributed data processing from Databricks | slideshare.net |
Data processing | Storm - Distributed data processing from Twitter | slideshare.net |
Data store | Bigtable - Distributed column-oriented database from Google | harvard.edu |
Data store | HBase - Open source implementation of Bigtable | slideshare.net |
Data store | Cassandra - Distributed column-oriented database from Facebook | slideshare.net |
Data store | DynamoDB - Document-oriented database from Amazon | harvard.edu |
Data store | MongoDB - Document-oriented database | slideshare.net |
Data store | Spanner - Globally-distributed database from Google | research.google.com |
Data store | Memcached - Distributed memory caching system | slideshare.net |
Data store | Redis - Distributed memory caching system with persistence and value types | slideshare.net |
File system | Google File System (GFS) - Distributed file system | research.google.com |
File system | Hadoop File System (HDFS) - Open source implementation of GFS | apache.org |
Misc | Chubby - Lock service for loosely-coupled distributed systems from Google | research.google.com |
Misc | Dapper - Distributed systems tracing infrastructure | research.google.com |
Misc | Kafka - Pub/sub message queue from LinkedIn | slideshare.net |
Misc | Zookeeper - Centralized infrastructure and services enabling synchronization | slideshare.net |
Add an architecture | Contribute |
Company architectures
Company engineering blogs
Architectures for companies you are interviewing with.
Questions you encounter might be from the same domain.
- Airbnb Engineering
- Atlassian Developers
- Autodesk Engineering
- AWS Blog
- Bitly Engineering Blog
- Box Blogs
- Cloudera Developer Blog
- Dropbox Tech Blog
- Engineering at Quora
- Ebay Tech Blog
- Evernote Tech Blog
- Etsy Code as Craft
- Facebook Engineering
- Flickr Code
- Foursquare Engineering Blog
- GitHub Engineering Blog
- Google Research Blog
- Groupon Engineering Blog
- Heroku Engineering Blog
- Hubspot Engineering Blog
- High Scalability
- Instagram Engineering
- Intel Software Blog
- Jane Street Tech Blog
- LinkedIn Engineering
- Microsoft Engineering
- Microsoft Python Engineering
- Netflix Tech Blog
- Paypal Developer Blog
- Pinterest Engineering Blog
- Quora Engineering
- Reddit Blog
- Salesforce Engineering Blog
- Slack Engineering Blog
- Spotify Labs
- Twilio Engineering Blog
- Twitter Engineering
- Uber Engineering Blog
- Yahoo Engineering Blog
- Yelp Engineering Blog
- Zynga Engineering Blog
Source(s) and further reading
Under development
Interested in adding a section or helping complete one in-progress? Contribute!
- Distributed computing with MapReduce
- Consistent hashing
- Scatter gather
- Contribute
Credits
Credits and sources are provided throughout this repo.
Special thanks to:
- Hired in tech
- Cracking the coding interview
- High scalability
- checkcheckzz/system-design-interview
- shashank88/system_design
- mmcgrana/services-engineering
- System design cheat sheet
- A distributed systems reading list
- Cracking the system design interview
Contact info
Feel free to contact me to discuss any issues, questions, or comments.
My contact info can be found on my GitHub page.
License
Creative Commons Attribution 4.0 International License (CC BY 4.0)
http://creativecommons.org/licenses/by/4.0/