ETL Pipeline Implementation on Azure
This document outlines the creation of an end-to-end ETL pipeline on Microsoft Azure, utilizing Azure Data Factory for orchestration, Azure Databricks for transformation, Azure Data Lake Storage Gen2 for storage, Azure Synapse Analytics for data warehousing, and Power BI for visualization. The pipeline is designed to be scalable, secure, and efficient,... Continue Reading →
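The service-by-service flow this post describes can be sketched as plain Python functions. Everything below is a hypothetical stand-in: the function names and the sample rows are illustrative placeholders for the Azure services, not Azure SDK calls.

```python
# Minimal sketch of the ETL stages the post describes. Each function is a
# hypothetical placeholder for the corresponding Azure service, not real SDK code.

def extract(source: str) -> list[dict]:
    """Land raw records in ADLS Gen2 (here: return sample rows)."""
    return [{"id": 1, "amount": "42.5"}, {"id": 2, "amount": "7.0"}]

def transform(rows: list[dict]) -> list[dict]:
    """Databricks-style cleansing: cast string amounts to floats."""
    return [{"id": r["id"], "amount": float(r["amount"])} for r in rows]

def load(rows: list[dict]) -> int:
    """Write curated rows to the Synapse warehouse (here: just count them)."""
    return len(rows)

def run_pipeline(source: str) -> int:
    """Data Factory's role: orchestrate the stages in order."""
    return load(transform(extract(source)))

print(run_pipeline("adls://raw/sales"))  # → 2
```

In the real pipeline, Data Factory would trigger each stage as an activity and Power BI would read the curated layer; the sketch only captures the extract → transform → load ordering.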
Processing 10 TB of Data in Databricks!!
Interviewer: Let's assume you're processing 10 TB of data in Databricks. How would you configure the cluster to optimize performance?
Candidate: To process 10 TB of data efficiently, I would recommend a cluster configuration with a large number of nodes and sufficient memory. First, I would estimate the number of partitions required to process the data in... Continue Reading →
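The partition estimate the candidate mentions is straightforward arithmetic. A minimal sketch, assuming a 128 MB target partition size (which mirrors Spark's default `spark.sql.files.maxPartitionBytes`); both numbers are assumptions, not figures from the post.

```python
# Back-of-the-envelope partition estimate for the 10 TB scenario.
# 128 MB per partition is an assumed target, matching Spark's default
# spark.sql.files.maxPartitionBytes.

def estimate_partitions(total_bytes: int,
                        target_partition_bytes: int = 128 * 1024**2) -> int:
    # Ceiling division, so a final partial partition is still counted.
    return -(-total_bytes // target_partition_bytes)

ten_tb = 10 * 1024**4
print(estimate_partitions(ten_tb))  # → 81920
```

Roughly 82,000 partitions at 128 MB each; from there you can size the cluster by deciding how many of those partitions each core should process per wave.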
Low-Level System Design Articles
These articles will save you 50+ hours of hopping between resources and wasting time. 1) Scalability: https://lnkd.in/gq4hW9qx 2) Horizontal vs Vertical Scaling: https://lnkd.in/g8qcwRCy 3) Latency vs Throughput: https://lnkd.in/gDAx6QQd 4) Load Balancing: https://lnkd.in/gefSiXEJ 5) Caching: https://lnkd.in/gAp-9udf 6) ACID Transactions: https://lnkd.in/g-sjsMwX 7) SQL vs NoSQL: https://lnkd.in/gwCe58TU 8) Database Indexes: https://lnkd.in/gE_q5m_g 9) Database Sharding: https://lnkd.in/gFdNxDrU 10) Content Delivery... Continue Reading →
What are surrogate keys, and how do we handle them during a data warehouse migration?
What is a surrogate key? A surrogate key is simply a unique identifier assigned to each row in a dimension table. Simple, isn’t it? Yes. But this might raise a few questions: what about the primary key? It is also unique and assigned to each row. So how does a surrogate key differ from a table’s primary key, what... Continue Reading →
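The idea above can be shown in a few lines. A minimal sketch of surrogate-key assignment for a dimension table: the business (natural) key stays on the row, while a system-generated surrogate key is added alongside it. The column names (`customer_sk`, `customer_code`) and the in-memory counter are hypothetical; in a warehouse this would typically be an IDENTITY column or a sequence.

```python
# Sketch: each dimension row keeps its natural key ("customer_code") but also
# receives a system-generated surrogate key ("customer_sk") that fact tables
# would reference instead of the business key.
from itertools import count

surrogate_seq = count(start=1)  # stand-in for an IDENTITY column / sequence

def add_surrogate_key(row: dict) -> dict:
    return {"customer_sk": next(surrogate_seq), **row}

dim_customer = [
    add_surrogate_key({"customer_code": "C-100", "name": "Asha"}),
    add_surrogate_key({"customer_code": "C-200", "name": "Ravi"}),
]
print([r["customer_sk"] for r in dim_customer])  # → [1, 2]
```

Because the surrogate key is generated by the warehouse rather than carried in from the source, a migration must either regenerate these keys and remap every fact-table reference, or carry the old keys across and continue the sequence from the maximum migrated value.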