Databricks job clusters are stricter than interactive clusters
While converting a list to a DataFrame, I got a type error, but only on a job cluster, not on an interactive cluster. Why does it happen, and how can you work around it?
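For context, here is a minimal sketch of the pattern that tends to trip stricter type checking, with two common workarounds; the values and column name are hypothetical, and `spark` is the session Databricks provides in notebooks:

```python
from pyspark.sql.types import StringType

values = ["alice", "bob", "carol"]

# Passing bare scalars relies on type inference, which stricter
# runtimes may reject:
# df = spark.createDataFrame(values)  # can raise a type error

# Workaround 1: declare the element type explicitly.
df = spark.createDataFrame(values, StringType())

# Workaround 2: wrap each scalar into a one-column row.
df = spark.createDataFrame([(v,) for v in values], ["name"])
```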
Deleting a user in Databricks might seem harmless—until workflows start failing, SQL queries break, and ownership chaos unfolds. In this post, I share a hard-learned lesson about Databricks ownership, how to prevent disruptions, and what to do if you’ve already made the mistake. Learn best practices for managing SQL objects, workflows, and user permissions to avoid unexpected failures. Because when it comes to user deletion in Databricks, thinking twice can save you from a major headache.
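As a taste of the fix the post covers, here is a minimal sketch of transferring ownership before removing a user; the object names and the `data-eng-admins` group are hypothetical:

```python
# Reassign SQL object ownership *before* deleting the user, so
# queries and workflows that rely on owner rights keep working.
spark.sql("ALTER TABLE sales.orders OWNER TO `data-eng-admins`")
spark.sql("ALTER VIEW sales.daily_kpis OWNER TO `data-eng-admins`")
spark.sql("ALTER SCHEMA sales OWNER TO `data-eng-admins`")
```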
Exporting data to a CSV file in Databricks can sometimes result in multiple files, odd filenames, and unnecessary metadata—issues that aren’t ideal when sharing data externally. This guide explores two practical solutions: using Pandas for small datasets and leveraging Spark’s coalesce to consolidate partitions into a single, clean file. Learn how to choose the right approach for your use case and ensure your CSV exports are efficient, shareable, and hassle-free.
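A minimal sketch of both approaches, with hypothetical table and output paths:

```python
df = spark.table("sales.orders")  # hypothetical source table

# Option 1: small data. Convert to pandas and write one local file
# on the driver.
df.toPandas().to_csv("/tmp/orders.csv", index=False)

# Option 2: larger data. Coalesce to one partition so Spark emits a
# single part file (still inside a directory, with a part-0000* name).
(df.coalesce(1)
   .write.mode("overwrite")
   .option("header", True)
   .csv("dbfs:/tmp/orders_csv"))
```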
Exploring the Databricks Debugger: Writing flawless code on the first try is a dream, but debugging is a reality for most developers. In this post, I dive into the new Databricks code cell debugger, sharing my first impressions and tips for getting started with this powerful tool.
System tables on Databricks can help us monitor and manage our data warehouse. In this post, I’ll show how to enable them and how to install the Jobs Dashboard built on system tables.
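A minimal sketch of what querying them looks like, assuming the `system.lakeflow` schema has been enabled for the workspace (schema and column names are per the system tables docs; treat them as assumptions):

```python
# List the most recent job runs from the jobs system tables.
recent_runs = spark.sql("""
    SELECT job_id, run_id, period_start_time, result_state
    FROM system.lakeflow.job_run_timeline
    ORDER BY period_start_time DESC
    LIMIT 20
""")
recent_runs.show()
```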
Cleaning data is a very common task for data professionals. In this post, I demonstrate a few common data cleaning tasks with Spark, using both Python and SQL.
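For a flavor of the kind of tasks covered, a minimal sketch against a hypothetical staging table:

```python
from pyspark.sql import functions as F

raw = spark.table("staging.customers")  # hypothetical input

cleaned = (raw
    .dropDuplicates(["customer_id"])                # remove duplicate rows
    .withColumn("email", F.lower(F.trim("email")))  # normalize case/whitespace
    .fillna({"country": "unknown"})                 # fill missing values
    .filter(F.col("customer_id").isNotNull()))      # drop unusable rows
```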
As data engineers, we need to monitor the usage and costs of our data solutions. Databricks recently released tools to help us do that: the Account Usage Dashboard and Budgets, both based on the “billing” system schema.
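A minimal sketch against the billing system schema, assuming it is enabled for the workspace:

```python
# Aggregate consumed DBUs per SKU for the last 30 days.
usage = spark.sql("""
    SELECT sku_name, SUM(usage_quantity) AS dbus
    FROM system.billing.usage
    WHERE usage_date >= date_sub(current_date(), 30)
    GROUP BY sku_name
    ORDER BY dbus DESC
""")
usage.show(truncate=False)
```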
How using SQL window functions with a non-unique ORDER BY column can cause non-deterministic results
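A minimal sketch of the pitfall, with a hypothetical orders table: `order_date` is not unique per customer, so ROW_NUMBER() can pick a different “first” row on different runs; adding a unique tiebreaker column makes it deterministic.

```python
first_orders = spark.sql("""
    SELECT customer_id, order_id,
           ROW_NUMBER() OVER (
               PARTITION BY customer_id
               ORDER BY order_date, order_id   -- order_id breaks ties
           ) AS rn
    FROM sales.orders
""").filter("rn = 1")
```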
Databricks recently added a for-each task to Workflows. How does it work, and what are its limitations?
Cloning tables in Databricks is a fast way to create replicated data for testing or archiving. Explore the different types of table cloning, each with its pros and cons.
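A minimal sketch of the two clone types, with hypothetical table names: a shallow clone copies only metadata and references the source’s data files, while a deep clone also copies the data itself.

```python
# Cheap, metadata-only copy for testing against production data.
spark.sql("CREATE TABLE dev.orders_test SHALLOW CLONE prod.orders")

# Full, independent copy suitable for archiving.
spark.sql("CREATE TABLE archive.orders_2024 DEEP CLONE prod.orders")
```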
Excel is one of the most common data file formats, and, as data engineers, we are required to read data from it on almost every project. Working in Databricks, you can read and write Excel files, but you need to pay attention to some pitfalls.
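A minimal sketch of one common approach, reading via pandas and handing off to Spark; the path and sheet name are hypothetical, and .xlsx support needs the openpyxl package installed:

```python
import pandas as pd

pdf = pd.read_excel("/Volumes/raw/finance/budget.xlsx",
                    sheet_name="2024", engine="openpyxl")

# Convert to a Spark DataFrame for further processing; watch out for
# pitfalls such as mixed-type columns and pandas-only dtypes.
df = spark.createDataFrame(pdf)
```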