diff --git a/solutions/system_design/mint/README.md b/solutions/system_design/mint/README.md
index 215ee7e9..513a406e 100644
--- a/solutions/system_design/mint/README.md
+++ b/solutions/system_design/mint/README.md
@@ -94,7 +94,7 @@ We could store info on the 10 million users in a [relational database](https://g
 * The **Web Server** forwards the request to the **Accounts API** server
 * The **Accounts API** server updates the **SQL Database** `accounts` table with the newly entered account info
 
-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.
 
 The `accounts` table could have the following structure:
 
@@ -180,7 +180,7 @@ We'll create an [index](https://github.com/ido777/system-design-primer-update#us
 
 For the **Category Service**, we can seed a seller-to-category dictionary with the most popular sellers. If we estimate 50,000 sellers and estimate each entry to take less than 255 bytes, the dictionary would only take about 12 MB of memory.
 
-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.
 
 ```python
 class DefaultCategories(Enum):
@@ -263,7 +263,7 @@ Running analyses on the transaction files could significantly reduce the load on
 
 We could call the **Budget Service** to re-run the analysis if the user updates a category.
 
-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.
 
 Sample log file format, tab delimited:
 
diff --git a/solutions/system_design/pastebin/README.md b/solutions/system_design/pastebin/README.md
index 06268c30..72bc40cb 100644
--- a/solutions/system_design/pastebin/README.md
+++ b/solutions/system_design/pastebin/README.md
@@ -104,7 +104,7 @@ An alternative to a relational database acting as a large hash table, we could u
 * Saves the paste data to the **Object Store**
 * Returns the url
 
-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.
 
 The `pastes` table could have the following structure:
 
@@ -192,7 +192,7 @@ Response:
 
 Since realtime analytics are not a requirement, we could simply **MapReduce** the **Web Server** logs to generate hit counts.
 
-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.
 
 ```python
 class HitCounts(MRJob):
diff --git a/solutions/system_design/query_cache/README.md b/solutions/system_design/query_cache/README.md
index 7f6ae936..3f66751b 100644
--- a/solutions/system_design/query_cache/README.md
+++ b/solutions/system_design/query_cache/README.md
@@ -93,7 +93,7 @@ Since the cache has limited capacity, we'll use a least recently used (LRU) appr
 
 The cache can use a doubly-linked list: new items will be added to the head while items to expire will be removed from the tail. We'll use a hash table for fast lookups to each linked list node.
 
-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.
 
 **Query API Server** implementation:
 
diff --git a/solutions/system_design/sales_rank/README.md b/solutions/system_design/sales_rank/README.md
index 58a30e8f..57e326cb 100644
--- a/solutions/system_design/sales_rank/README.md
+++ b/solutions/system_design/sales_rank/README.md
@@ -80,7 +80,7 @@ Handy conversion guide:
 
 We could store the raw **Sales API** server log files on a managed **Object Store** such as Amazon S3, rather than managing our own distributed file system.
 
-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.
 
 We'll assume this is a sample log entry, tab delimited:
 
diff --git a/solutions/system_design/social_graph/README.md b/solutions/system_design/social_graph/README.md
index f68a3688..14200717 100644
--- a/solutions/system_design/social_graph/README.md
+++ b/solutions/system_design/social_graph/README.md
@@ -58,7 +58,7 @@ Handy conversion guide:
 
 ### Use case: User searches for someone and sees the shortest path to the searched person
 
-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.
 
 Without the constraint of millions of users (vertices) and billions of friend relationships (edges), we could solve this unweighted shortest path task with a general BFS approach:
 
diff --git a/solutions/system_design/twitter/README.md b/solutions/system_design/twitter/README.md
index 08ba68bf..e4d24ad4 100644
--- a/solutions/system_design/twitter/README.md
+++ b/solutions/system_design/twitter/README.md
@@ -119,7 +119,7 @@ We could store media such as photos or videos on an **Object Store**.
 * Uses the **Notification Service** to send out push notifications to followers:
     * Uses a **Queue** (not pictured) to asynchronously send out notifications
 
-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.
 
 If our **Memory Cache** is Redis, we could use a native Redis list with the following structure:
 
diff --git a/solutions/system_design/web_crawler/README.md b/solutions/system_design/web_crawler/README.md
index c8200dc1..c558a2a3 100644
--- a/solutions/system_design/web_crawler/README.md
+++ b/solutions/system_design/web_crawler/README.md
@@ -96,7 +96,7 @@ We could store `links_to_crawl` and `crawled_links` in a key-value **NoSQL Datab
 * Removes the link from `links_to_crawl` in the **NoSQL Database**
 * Inserts the page link and signature to `crawled_links` in the **NoSQL Database**
 
-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.
 
 `PagesDataStore` is an abstraction within the **Crawler Service** that uses the **NoSQL Database**:
 
@@ -180,7 +180,7 @@ class Crawler(object):
 
 We need to be careful the web crawler doesn't get stuck in an infinite loop, which happens when the graph contains a cycle.
 
-**Clarify with your interviewer how much code you are expected to write**.
+**Clarify with your interviewer the expected amount, style, and purpose of the code you should write**.
 
 We'll want to remove duplicate urls: