An aspiring data engineer recently reached out to me for some guidance on pivoting into the field from a software development background. The questions they asked are similar to what others have asked me in the past, so I decided to capture my responses here. I link to prior posts and other resources when possible… Continue Reading
Questions Answered: Parallel Load in Spark Notebook
I received many questions on my tutorial Ingest tables in parallel with an Apache Spark notebook using multithreading. In this video and post I address some of the questions that I couldn't easily answer in the YouTube comments. Watch the video for more complete answers, but here are quick responses with links to examples where… Continue Reading
Getting Started with Spark Structured Streaming – Current 22
I am honored to speak at Current 22. The example notebook that I walk through towards the end is available at https://github.com/datakickstart/datakickstart-databricks-workspace/blob/main/stackoverflow/stackoverflow_streaming.py.
Snowflake on Azure – Load with Synapse Pipeline
If you choose to use Snowflake along with Azure for your data platform, you will have to decide how to load the data. Landing processed data into your data lake on Azure Data Lake Storage Gen2 (ADLS) is the first step that I recommend in most environments. I like this pattern because then… Continue Reading
Snowflake Certification (SnowPro Core) study tips
This post provides some tips and references for anyone studying for the SnowPro Core certification.
Snowflake on Azure – Load with COPY INTO
In this tutorial we cover some basic but realistic examples of loading from CSV or Parquet files. The source data is in partitioned folders following a pattern of puYear=#### and puMonth=##, but we do not use the partition columns until the last example.
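As a quick preview of what the tutorial covers, here is a minimal sketch of a COPY INTO statement, assuming an external stage named adls_stage and a target table nyc_taxi already exist (the names and paths are placeholders):

```sql
-- Load one month of CSV files from the partitioned folders in the stage.
-- Stage name, table name, and folder paths are placeholders for illustration.
COPY INTO nyc_taxi
FROM @adls_stage/nyctaxi/
PATTERN = '.*puYear=2019/puMonth=1/.*'
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);
```

Parquet loads work similarly, though they typically map columns with a transforming SELECT or the MATCH_BY_COLUMN_NAME copy option.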
Snowflake on Azure – Create External Stage
Snowflake, like other analytic databases, has a fast way to load data from files. The COPY command can quickly read files and append the records to a table. It does this by reading from an external stage, which points to a cloud storage location. External stages currently support Azure Storage, Amazon S3, and Google Cloud Storage… Continue Reading
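As a rough sketch, creating an external stage for Azure looks like this, assuming a storage integration named azure_int has already been configured (the account, container, and stage names are placeholders):

```sql
-- Create an external stage pointing at an ADLS Gen2 container.
-- The storage integration (azure_int), account, and container names
-- are placeholders; substitute your own.
CREATE OR REPLACE STAGE adls_stage
  STORAGE_INTEGRATION = azure_int
  URL = 'azure://myaccount.blob.core.windows.net/datalake/nyctaxi/';

-- Quick check that Snowflake can list files in the stage.
LIST @adls_stage;
```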
Monitor Synapse Spark with Log Analytics
Log Analytics provides a way to easily query Spark logs and set up alerts in Azure. This is a huge help when monitoring Apache Spark. In this video I walk through the setup steps and give a quick demo of this capability for the Azure Synapse Spark log4j output. I include written instructions and troubleshooting guidance in this… Continue Reading
Ingest tables in parallel with an Apache Spark notebook using multithreading
If we want to kick off a single Apache Spark notebook to process a list of tables, the code is easy to write. However, a simple loop over the list ends up loading one table after another (sequentially). If none of the tables are very big, it is quicker to have Spark load them concurrently (in parallel) using threads. There are a few ways to do this, but I am sharing the easiest way I have found when working with a notebook in Databricks, Azure Synapse Spark, Jupyter, or Zeppelin.
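The post walks through the full pattern; as a minimal sketch, the idea looks like the following, assuming the notebook's built-in spark session, a jdbc_url variable defined elsewhere, and a placeholder table list:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder list of source tables to ingest.
tables = ["customers", "orders", "order_items", "products"]

def load_table(table_name):
    # Hypothetical JDBC read; swap in your real source and options.
    df = (spark.read.format("jdbc")
          .option("url", jdbc_url)
          .option("dbtable", table_name)
          .load())
    df.write.mode("overwrite").saveAsTable(f"bronze.{table_name}")
    return table_name

# Each call mostly waits on I/O while the cluster does the work, so a
# small thread pool lets several table loads run as concurrent Spark jobs.
with ThreadPoolExecutor(max_workers=4) as pool:
    for finished in pool.map(load_table, tables):
        print(f"Loaded {finished}")
```

A thread pool (rather than separate processes) is enough here because the notebook's SparkSession can be shared across threads, and each submitted job is scheduled across the cluster by Spark itself.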
Run SQL Server locally on Docker
I recently needed a locally running SQL Server instance so that I could attach a database and deploy it to Azure SQL. The Windows 10 laptop I am using does not have SQL Server Developer edition installed yet, so I decided to set it up using Docker. What I like about using… Continue Reading
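For reference, spinning up the container looks roughly like this (the container name and password are placeholders, and the image tag may differ from the one used in the post):

```bash
# Run SQL Server in a container, exposing the default port 1433.
# ACCEPT_EULA and a strong SA password are required by the image.
docker run -d --name sql1 \
  -e "ACCEPT_EULA=Y" \
  -e "MSSQL_SA_PASSWORD=YourStrong!Passw0rd" \
  -p 1433:1433 \
  mcr.microsoft.com/mssql/server:2019-latest
```

From there you can connect with sqlcmd, Azure Data Studio, or SSMS at localhost,1433.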