Update README.md files across various system design solutions to clarify that candidates should ask about the expected amount, style, and purpose of code to be written during interviews.
parent f636ce9d92
commit 92a86f1d64
@@ -94,7 +94,7 @@ We could store info on the 10 million users in a [relational database](https://g
 * The **Web Server** forwards the request to the **Accounts API** server
 * The **Accounts API** server updates the **SQL Database** `accounts` table with the newly entered account info

-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.

 The `accounts` table could have the following structure:

@@ -180,7 +180,7 @@ We'll create an [index](https://github.com/ido777/system-design-primer-update#us

 For the **Category Service**, we can seed a seller-to-category dictionary with the most popular sellers. If we estimate 50,000 sellers and estimate each entry to take less than 255 bytes, the dictionary would only take about 12 MB of memory.

-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.

 ```python
 class DefaultCategories(Enum):
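The seller-to-category seeding described in this hunk can be sketched as a small lookup table. The category names and seller entries below are illustrative assumptions, not taken from the original solution:

```python
from enum import Enum

class DefaultCategories(Enum):
    """Hypothetical default spending categories; the enum in the actual
    solution may differ."""
    HOUSING = 0
    FOOD = 1
    GAS = 2
    SHOPPING = 3

# Seed dictionary mapping popular sellers to categories
# (seller names here are assumptions for illustration)
seller_category_map = {
    'Exxon': DefaultCategories.GAS,
    'Target': DefaultCategories.SHOPPING,
}

def categorize(seller):
    """Look up a seller's category; fall back to SHOPPING if unknown."""
    return seller_category_map.get(seller, DefaultCategories.SHOPPING)
```

At roughly 255 bytes per entry, 50,000 such entries stay around 12 MB, small enough to hold in memory on each service instance.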
@@ -263,7 +263,7 @@ Running analyses on the transaction files could significantly reduce the load on

 We could call the **Budget Service** to re-run the analysis if the user updates a category.

-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.

 Sample log file format, tab delimited:

@@ -104,7 +104,7 @@ An alternative to a relational database acting as a large hash table, we could u
 * Saves the paste data to the **Object Store**
 * Returns the url

-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.

 The `pastes` table could have the following structure:

@@ -192,7 +192,7 @@ Response:

 Since realtime analytics are not a requirement, we could simply **MapReduce** the **Web Server** logs to generate hit counts.

-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.

 ```python
 class HitCounts(MRJob):
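The hit-count MapReduce in this hunk uses `mrjob`; its core map and reduce steps can be sketched in plain Python. The tab-delimited field layout (url in the second field) is an assumption for illustration:

```python
from collections import Counter

def mapper(line):
    """Emit a (url, 1) pair for one log line. Assumes tab-delimited lines
    with the url in the second field (field layout is an assumption)."""
    fields = line.split('\t')
    return (fields[1], 1)

def reduce_counts(pairs):
    """Sum counts per url, as the MapReduce reduce step would."""
    counts = Counter()
    for url, n in pairs:
        counts[url] += n
    return dict(counts)

logs = [
    't1\thttp://foo.com/1',
    't2\thttp://foo.com/1',
    't3\thttp://foo.com/2',
]
hit_counts = reduce_counts(mapper(line) for line in logs)
```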
@@ -93,7 +93,7 @@ Since the cache has limited capacity, we'll use a least recently used (LRU) appr

 The cache can use a doubly-linked list: new items will be added to the head while items to expire will be removed from the tail. We'll use a hash table for fast lookups to each linked list node.

-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.

 **Query API Server** implementation:

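The doubly-linked-list-plus-hash-table LRU approach described in this hunk can be sketched as follows; the README's actual **Query API Server** implementation may differ in naming and detail:

```python
class Node:
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    """Hash table for O(1) lookup + doubly-linked list for recency order:
    most recently used at the head, eviction candidates at the tail."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lookup = {}                  # key -> Node
        self.head = Node(None, None)      # sentinel on the MRU side
        self.tail = Node(None, None)      # sentinel on the LRU side
        self.head.next, self.tail.prev = self.tail, self.head

    def _remove(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _add_to_head(self, node):
        node.next, node.prev = self.head.next, self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.lookup:
            return None
        node = self.lookup[key]
        self._remove(node)
        self._add_to_head(node)           # mark as most recently used
        return node.value

    def set(self, key, value):
        if key in self.lookup:
            self._remove(self.lookup[key])
        node = Node(key, value)
        self.lookup[key] = node
        self._add_to_head(node)
        if len(self.lookup) > self.capacity:
            lru = self.tail.prev          # expire from the tail
            self._remove(lru)
            del self.lookup[lru.key]
```

Both `get` and `set` are O(1): the hash table finds the node, and the sentinel-bounded linked list splices it in constant time.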
@@ -80,7 +80,7 @@ Handy conversion guide:

 We could store the raw **Sales API** server log files on a managed **Object Store** such as Amazon S3, rather than managing our own distributed file system.

-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.

 We'll assume this is a sample log entry, tab delimited:

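Parsing one such tab-delimited sales log entry could look like the sketch below. The field names and the sample entry are assumptions for illustration; confirm them against the actual sample format in the README:

```python
def parse_log_line(line):
    # Field layout is an assumption for illustration; confirm it against
    # the actual sample log entry before relying on it.
    keys = ('timestamp', 'product_id', 'category_id', 'qty',
            'total_price', 'seller_id', 'buyer_id')
    return dict(zip(keys, line.split('\t')))

entry = parse_log_line('t1\tproduct1\tcategory1\t2\t20.00\t1\t1')
```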
@@ -58,7 +58,7 @@ Handy conversion guide:

 ### Use case: User searches for someone and sees the shortest path to the searched person

-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.

 Without the constraint of millions of users (vertices) and billions of friend relationships (edges), we could solve this unweighted shortest path task with a general BFS approach:

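A minimal sketch of that general BFS approach, before any scaling constraints, assuming the social graph fits in memory as an adjacency dict of `{user_id: [friend_ids]}`:

```python
from collections import deque

def shortest_path(graph, source, dest):
    """Unweighted shortest path via breadth-first search.
    Returns the path as a list of ids, or None if unreachable."""
    if source == dest:
        return [source]
    prev = {source: None}          # also serves as the visited set
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in prev:
                prev[neighbor] = node
                if neighbor == dest:
                    # Walk predecessors back to the source
                    path = [dest]
                    while prev[path[-1]] is not None:
                        path.append(prev[path[-1]])
                    return path[::-1]
                queue.append(neighbor)
    return None
```

With millions of vertices the graph no longer fits on one machine, which is what motivates the sharded person/friend lookup services discussed in the solution.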
@@ -119,7 +119,7 @@ We could store media such as photos or videos on an **Object Store**.
 * Uses the **Notification Service** to send out push notifications to followers:
     * Uses a **Queue** (not pictured) to asynchronously send out notifications

-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.

 If our **Memory Cache** is Redis, we could use a native Redis list with the following structure:

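The fan-out onto cached home timelines can be sketched with a plain dict standing in for the Redis lists. In production this would use Redis `LPUSH`/`LTRIM`; the key naming and the per-timeline limit below are assumptions for illustration:

```python
# In-memory stand-in for one Redis list of recent tweet entries per
# follower; production code would LPUSH/LTRIM on keys such as
# `home_timeline:<user_id>` (key name is an assumption).
TIMELINE_LIMIT = 800   # max cached entries per timeline; value is an assumption

timelines = {}

def fan_out(tweet_id, author_id, follower_ids, timestamp):
    """Push a tweet entry onto each follower's cached home timeline,
    trimming each timeline to its most recent TIMELINE_LIMIT entries."""
    entry = f'{tweet_id} {author_id} {timestamp}'
    for follower_id in follower_ids:
        timeline = timelines.setdefault(follower_id, [])
        timeline.insert(0, entry)        # like LPUSH: newest entry first
        del timeline[TIMELINE_LIMIT:]    # like LTRIM 0 TIMELINE_LIMIT-1
```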
@@ -96,7 +96,7 @@ We could store `links_to_crawl` and `crawled_links` in a key-value **NoSQL Datab
 * Removes the link from `links_to_crawl` in the **NoSQL Database**
 * Inserts the page link and signature to `crawled_links` in the **NoSQL Database**

-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.

 `PagesDataStore` is an abstraction within the **Crawler Service** that uses the **NoSQL Database**:

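A minimal sketch of that `PagesDataStore` abstraction, using an in-memory dict in place of the **NoSQL Database**. The method names follow the bullets above; the exact signatures in the README's implementation may differ:

```python
class PagesDataStore:
    """Sketch of the Crawler Service's data-store abstraction; a dict
    stands in for the NoSQL Database for illustration."""
    def __init__(self):
        self.links_to_crawl = []    # ordered links awaiting crawl
        self.crawled_links = {}     # link -> page signature

    def remove_link_to_crawl(self, link):
        """Remove the link from `links_to_crawl` once it is processed."""
        self.links_to_crawl.remove(link)

    def insert_crawled_link(self, link, signature):
        """Record the page link and its signature in `crawled_links`."""
        self.crawled_links[link] = signature

    def crawled_similar(self, signature):
        """True if a page with the same signature was already crawled."""
        return signature in self.crawled_links.values()
```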
@@ -180,7 +180,7 @@ class Crawler(object):

 We need to be careful the web crawler doesn't get stuck in an infinite loop, which happens when the graph contains a cycle.

-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.

 We'll want to remove duplicate urls:

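Removing duplicate urls is straightforward when the working set fits in memory, as in the sketch below; at larger scale the solution's discussion shifts to sorting or MapReduce-based deduplication:

```python
def remove_duplicate_urls(urls):
    """Deduplicate urls while preserving first-seen order, so the
    crawler does not revisit a page and loop on graph cycles."""
    seen = set()
    unique = []
    for url in urls:
        if url not in seen:
            seen.add(url)
            unique.append(url)
    return unique
```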