
By Nigel Meakins

Beyond Databricks Notebook Development

This entry is part of the 6-part series Development on Databricks.

In this post we’ll be looking at why you may want to consider alternatives to Databricks notebooks for development, and what that means for teams starting out on the unified analytics platform. We’ll go through some of the common areas of difficulty with notebook development on Databricks and how they compare with using an IDE. This post is based on my own experience of implementing projects with clients on the platform, and as such should be taken as my own opinion on the matter: something to take on board for consideration rather than dogma, mantra, gospel or any other theologically aligned thinking. Ultimately things depend on your own needs and circumstances, and hopefully this article will raise awareness of other ways of working.

Developing with Notebooks

Why Use Notebooks?

If you’ve done any development on Databricks you’ve probably used Databricks notebooks. They are the first port of call for many of us working with data on the platform. Very popular with Data Scientists in particular, they provide an environment where data transformation, wrangling, munging (or whatever your preferred term for generally ‘doing stuff’ with data) and visualising the results are all readily accessible. There is no software to install, no real configuration, and a generally gentle learning curve to getting working. The team can easily collaborate by creating their notebooks in workspaces and progress on their data journeys unhindered. Very similar in look and feel to Jupyter notebooks, they offer great functionality and compatibility with a well-established way of working that is widely used across Data Science teams. For some activities, such as data exploration and quick analysis exercises, they are very appealing. Many very mature data teams never really leave the notebook environment, finding all they need in this native offering.

Another popular use for notebooks is to quickly ‘proof out’ something before moving on to more in-depth development work. The ease with which you can execute and visualise results makes Databricks notebooks ideal for these rapid-turnaround development tasks.

Databricks Developer Productivity

Before I find myself cast as the pantomime villain, just to be clear, there is absolutely nothing ‘bad’ about development with just notebooks; after all, they are perfectly capable of implementing code across all aspects of the Databricks analytics platform. They were, however, at least as far as I can tell, not really intended to serve as fully-fledged complex development environments.

As many teams have discovered, when you get into the more involved data engineering activities, working with code that would benefit from improved structuring, visibility and reuse, you may find yourself wanting to revisit this development approach. From what we’ve experienced, there is a definite tipping point beyond which working in this fashion has a considerable impact on developer productivity. When trying to exercise any of the well-established modern development practices that are second nature to most developers, such as breaking the code down into encapsulated modules, refactoring elements of the code, and writing unit tests to accompany it, notebook-based development poses a lot of challenges. These aspects of development, taken for granted in a respectable Integrated Development Environment (IDE), are not readily available to your notebook endeavours. As things progress you’ll inevitably find maintaining and expanding on your efforts increasingly difficult, and productivity is likely to suffer considerably as a result.

Embracing the IDE

Okay, enough doom-mongering for one lifetime. The simple solution when you’re struggling with one tool is of course to change to one better suited to the job at hand. For complex data engineering or analytical development on Databricks, this means pretty much the same as with any substantial code effort: using an IDE.

PyCharm, IntelliJ IDEA and Visual Studio Code IDE icons

Whenever we see clients that are set on the attractive simplicity of notebook-based development, we always make them aware of the alternative ways of working that IDEs offer. This allows them to understand the overheads and feasibility of each approach and make an informed decision as to which will best suit their needs. After all, in the short term notebook-based development may be fine, but what will be required as time passes and demands progress to the next level of development complexity is what will determine the best approach.

A Quick Comparison: Notebooks vs IDEs

Here are some of the development experiences that differ noticeably across the two ways of working.

Code Structure and Navigation

IDEs

In Python, Scala, Java and just about any other language you’ll be used to structuring your code into libraries, packages and modules or their equivalents, with best practices for how to structure the files. You’ll be familiar with distributing these logical units and referencing them from within your code using import statements. Navigating your code in the IDE is straightforward, structured into projects, and assisted with search and ‘Go To Definition’ functionality across all your code. You can quickly bounce around from one block of code to another in another file somewhere like an over-caffeinated tech-evangelist tipsy on the latest Kool-Aid.
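As a rough illustration (the project, package and function names here are entirely made up), a typical layout and the corresponding import might look something like this:

my_databricks_project/
    ingestion/
        __init__.py
        loaders.py          # e.g. read_sales_data()
    transformation/
        __init__.py
        cleansing.py        # e.g. standardise_dates()
    tests/
        test_cleansing.py

# referencing the code from elsewhere in the project
from transformation.cleansing import standardise_dates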

Notebooks

As part of encapsulating your development code, you may have created various Databricks notebooks, perhaps grouping functions and classes together as you would in standard Python modules. You might even put these in a hierarchy of folders in order to structure them better, as you would with Python packages and modules. This does of course help prevent duplication of code, aids understanding and reuse, and should make your notebook-based development easier to manage.

Note, however, that you won’t have a true Library\Package\Module hierarchy, as the folder structure doesn’t impose any scope on your code definitions. Any functions or classes defined in one notebook will clash with same-named definitions in other notebooks once they are loaded into the same session, regardless of folder structure, quite unlike how Python or Scala code is namespaced.
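As a quick illustration (with made-up notebook names), suppose two notebooks in different folders each define a clean() function; once both have been run into the same session, the later definition silently replaces the earlier one:

# notebook /Shared/sales/utils
def clean(df):
    ...   # sales-specific cleansing

# notebook /Shared/finance/utils
def clean(df):
    ...   # finance-specific cleansing

# after both notebooks have been loaded into the session, only the
# finance version of clean() remains - the folder structure provides
# no namespace to keep the two definitions apart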

The notebook navigation experience in the workspace isn’t the best, however, and you’ll find yourself opening a lot of tabs to navigate between different notebooks as you refer to various elements of your code. The UI isn’t really intended for too much to-ing and fro-ing between notebooks, preferring a more ‘all in one place’ way of working. You may start to get a little dizzy as you find your way around the various browser tabs of your code.

Referencing Your Code

IDEs

Not much to say here really: you structure your code into projects, packages and modules and add the respective import statements. If you want to refactor files, move things around or whatever, the IDE generally takes care of ensuring that everything still lines up nicely.

Notebooks

In order to make use of your nicely structured notebook development, you’ll need to make sure it has been loaded into your Spark session, by running the required notebooks on which your Databricks code depends. You’ll soon find yourself having to maintain rather ugly notebooks that are wrappers whose sole job is to call all these other required notebooks, using lots of %run magic statements.
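A minimal sketch of such a ‘run_all’ wrapper notebook, with purely illustrative notebook paths (in Databricks each %run has to sit in its own cell):

# 'run_all' notebook

%run /Shared/lib1/ingestion_utils

%run /Shared/lib1/transformation_utils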

and then make sure that you call this notebook wherever you need to make use of the code.
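With the same illustrative paths as above (and a made-up function name), the consuming notebook might look something like this:

# first cell of the consuming notebook
%run /Shared/lib1/run_all

# later cells can then call the shared code, e.g. a hypothetical
# clean_sales_data() function defined in one of the wrapped notebooks
cleaned_df = clean_sales_data(raw_df)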

This can introduce some development pains. Neglecting to add the required %run statements for loading new elements of code is easily done, resulting in calls to undefined code. Additionally, should you decide to restructure your code through some attempts at refactoring, you’ll find yourself having to rehash a load of paths in these wrappers. It is easy to introduce bugs and harder to improve your code as a result, both of which will impact your notebook development productivity.

Referencing other Code

Libraries

In both Databricks notebook and IDE project development you can of course reference libraries in the standard fashion for the language in question. Again we use the familiar import statement approach, ensuring that the libraries are made available on the cluster executing the code, or within the session itself.
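For example (using an arbitrary PyPI package purely for illustration), a notebook-scoped install followed by the usual import looks like this; the same import works from IDE code provided the library is installed on the cluster or in your local environment:

# make the library available to this notebook's session
%pip install openpyxl

# then reference it in the standard fashion
import openpyxl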

I’ll talk later in the series about different ways of making these libraries available to your executing code, as there are some differences around managing the impact of updates to the libraries.

Testing Your Code

IDEs

Every modern IDE has some degree of integration with unit testing frameworks built in, either in the core product or through various plugins/extensions. You can often right-click on an element of your code and ‘Generate Unit Tests’ in order to quickly sketch out some tests and get started on ensuring things actually work. You may be following Test-Driven Development (TDD) and working very tightly between code and validation/verification, and your IDE will try to make this a generally happy experience for all involved. Executing your tests either in isolation or as part of a suite is simple, efficient and transparent. You can find out more about the testing functionality in PyCharm, IntelliJ IDEA (Scala) and Visual Studio Code (Python) in their respective documentation. I’ll be going into testing in a number of later posts in the series.
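As a flavour of what this looks like, here’s a minimal pytest-style test (against the hypothetical standardise_dates() function from earlier) that an IDE can discover and run at the click of a button:

# tests/test_cleansing.py
from transformation.cleansing import standardise_dates

def test_standardise_dates_handles_mixed_formats():
    # both input formats should come back as ISO dates
    result = standardise_dates(["2021-01-31", "31/01/2021"])
    assert result == ["2021-01-31", "2021-01-31"]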

Notebooks

In notebook development on Databricks none of this functionality is available; you’ll most likely be grouping test suites into the same notebook and then executing either the whole notebook, or individual cells for specific test cases. You can of course reference unit testing frameworks such as PyTest or ScalaTest and crack on with implementing tests, but the ease with which you can exercise and get feedback on them is limited. Depending on how much you embrace testing as part of your work (and the general advice is to make it at least a big enough part to avoid embarrassment or the QA firing squad), you will find this may have a real impact.
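In practice this often ends up looking something like the sketch below, with tests written as ordinary functions and invoked by hand rather than being picked up by a test runner:

# cell in a 'tests' notebook - no runner integration, so each test
# is a plain function that we have to remember to call ourselves
def test_standardise_dates_handles_mixed_formats():
    result = standardise_dates(["2021-01-31", "31/01/2021"])
    assert result == ["2021-01-31", "2021-01-31"]

test_standardise_dates_handles_mixed_formats()
print("tests passed")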

Debugging

IDEs

In the IDE world of course we have nice debuggers that allow stepping through, over, around and behind your code like Donnie Yen in an Ip Man movie when you need to figure out why your code has coughed up the odd furball. Variables are available for inspection, pointing you happily along the way to bug-squishing, and all is happy once more. These tools are pretty indispensable and you soon see why.

Notebooks

In Databricks notebooks you run your cell and check the results; hopefully it did what you expected or, if not and you’re lucky, it is obvious where the development went belly up. However, there are times when you really, really need to see what is happening on each line, and where that bug is creeping in. Not being able to step into your code leaves you splattering your notebook development efforts with print() statements and the like in an attempt to play hunt-the-gremlin. Then, once you find the foible, you go back and remove all those print() splatters and finally get back to coding. Not great.

Other Wholegrain IDE Goodness

I won’t go into all the other advantages of using an IDE as we’ve all got better things to be doing. Suffice it to say IDEs such as PyCharm, IntelliJ IDEA, VS Code and the like are generally crammed with a trove of productivity-enhancing tools and functionality that will put your development in a different league. With the community editions having such fantastic features you won’t need to shell out a penny, though for some the non-gratis editions may of course have something additional to offer worth the price tag.

Functionality Summary

Here’s a quick summary of the areas compared above, each of which the IDE and notebook experience covers to a differing degree, ranging from ‘Some limitations’ through ‘Complete’ to ‘Advanced’:

Data Visualisation
Rapid Prototyping
Ad-hoc Analysis and Exploration
Code Structure and Navigation
Referencing Your Code
Testing
Debugging
Code Editing Productivity

 

Libraries with IDEs, Analysis and Visualisation with Notebooks

No one said it has to be either/or of course. For many teams, the IDE is the tool of choice for the more complex library code development, with the productivity gains that IDEs offer. The notebook is there to tap into these libraries, providing a development interface for further manipulating, exploring and visualising the resultant datasets. Each approach plays to its strengths.
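A typical pattern (with an entirely hypothetical library name) is a thin notebook that leans on the packaged, IDE-developed code for the heavy lifting and keeps only the exploration and visualisation inline:

# notebook cell - my_sales_lib has been developed in the IDE,
# packaged as a wheel and installed on the cluster
from my_sales_lib.transformation import build_sales_summary

summary_df = build_sales_summary(spark, period="2021-Q1")

# use the notebook's built-in rendering and charting on the results
display(summary_df)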

Moving from Databricks Notebooks Development to IDE Projects

What to do if you’ve dived in, have a whole slew of notebooks that form the backbone of your data integration and analytics efforts, and are struggling with the development experience? Do not despair! It doesn’t have to descend into a frustrating, opaque, unmanageable quagmire of notebook nastiness. It’s generally not difficult to convert the notebook code to scripts. You can automate the exporting of your notebooks’ source code from workspaces using the Databricks CLI workspace export_dir command. This will recursively export the notebooks within the specified folder into .py, .scala, .sql or .r files based on the notebook language of choice.

# export all files recursively to the destination folder.
databricks workspace export_dir /Shared/lib1 D:\tmp\shared\lib1

You will find that certain cells using magics will need to be revisited, but if most of your code is based on the various Spark APIs this shouldn’t amount to much.
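For reference, exported notebooks arrive as plain source files with cell boundaries and magics preserved as comments, which makes cells like the %run wrappers above easy to spot and rework (illustrative content shown):

# Databricks notebook source
# MAGIC %run /Shared/lib1/ingestion_utils

# COMMAND ----------

def clean_sales_data(df):
    ...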

Summing Up

From my experience of having worked on projects that are notebook-based (or notebook-heavy) and on projects that are more IDE-based, for me the IDE approach is the way to go for data engineering. When it comes to crafting code, a good ‘fat client’ IDE running on your workstation will make a massive difference, with all the responsiveness and functionality you need right there at your fingertips. Of course this is just my opinion on the matter.

We’ll be going through some of the most common aspects of developing on Databricks throughout this series, so that you can see for yourself how to really get to grips with working on this fantastic platform. I hope this post has been of some use in deciding whether to opt for the IDE or remain with notebook-based development for your particular situation.

Thanks for reading and see you in the next post on Development on Databricks.