The MATLAB and Simulink on Databricks Reference Architecture enables users to work with data where it lives, combining domain-specific modeling with Databricks security and scalable compute.
The MATLAB Interface for Databricks enables engineers and data scientists to develop, scale, and deploy MATLAB and Simulink workflows directly on Databricks, operationalizing algorithms and simulations in production pipelines and digital twins.
- Run MATLAB and Simulink directly in Databricks
- Use SQL with MATLAB and Databricks
- Develop models in MATLAB and Simulink and scale with Spark™
- Work with Spark interactively in MATLAB
- Share MATLAB and Simulink algorithms and models with Python and SQL users
- Deploy Simulink models in Databricks as digital twins
Interactive Development and Prototyping
MATLAB and Simulink on Databricks
The MATLAB and Simulink on Databricks Reference Architecture brings interactive compute to the data, enabling teams to develop and simulate using data securely within Databricks.
Access Big Data with SQL
Access cloud data sources through a Databricks cluster by connecting it to MATLAB with Database Toolbox™. Manipulate data remotely and use SQL to access tabular data on Databricks. Use Apache Spark™ SQL to query data in Spark workflows.
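A minimal sketch of the Database Toolbox workflow described above, querying a Databricks table over JDBC. The data source name, credentials, and table are placeholders, not part of the reference architecture; it assumes a JDBC data source for your Databricks workspace has already been configured in MATLAB.

```matlab
% Query a Databricks table with Database Toolbox (sketch).
% "DatabricksJDBC" is an assumed, pre-configured JDBC data source name;
% the table name and token environment variable are placeholders.
conn = database("DatabricksJDBC", "token", getenv("DATABRICKS_TOKEN"));
tbl  = fetch(conn, "SELECT * FROM samples.trips LIMIT 10");
close(conn);
head(tbl)   % fetch returns a MATLAB table, ready for further analysis
```

The result comes back as a MATLAB table, so downstream analysis and visualization use the same code whether the data originated in Databricks or locally.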
Programmatically Interact with Databricks
The REST API enables MATLAB users to interact programmatically with the Databricks environment from MATLAB, for example to control jobs or clusters within Databricks.
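As a hedged sketch of that pattern, the Databricks Jobs API can be called from MATLAB with `webread` and a bearer token. The workspace URL and token variable are placeholders; `/api/2.1/jobs/list` is a standard Databricks Jobs API endpoint.

```matlab
% List Databricks jobs via the REST API from MATLAB (sketch).
% Replace the workspace host with your own; the personal access token
% is assumed to be stored in the DATABRICKS_TOKEN environment variable.
host = "https://<your-workspace>.cloud.databricks.com";
opts = weboptions("HeaderFields", ...
    ["Authorization", "Bearer " + getenv("DATABRICKS_TOKEN")]);
resp = webread(host + "/api/2.1/jobs/list", opts);
disp(resp)   % decoded JSON: job IDs, settings, schedules
```

The same approach works for cluster management or run submission by swapping the endpoint, so scheduled pipelines can be controlled from the MATLAB desktop.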
Production Deployment with MATLAB and Simulink in Databricks
Package and deploy your algorithms and models to Databricks clusters using MATLAB Compiler SDK and Simulink Compiler with MATLAB Runtime. Run them on-demand in notebooks, schedule them as jobs, integrate them into data pipelines, and power digital twins.
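One way the packaging step can look, assuming MATLAB Compiler SDK is installed: build a MATLAB function into a Python package that a Databricks notebook can import and run against MATLAB Runtime. The function and package names below are hypothetical.

```matlab
% Package a MATLAB function for deployment (sketch, MATLAB Compiler SDK).
% "score_model.m" and the package name "scorer" are placeholder names.
buildResults = compiler.build.pythonPackage("score_model.m", ...
    "PackageName", "scorer", ...
    "OutputDir", "deploy");
disp(buildResults.Files)   % generated artifacts to install on the cluster
```

The generated package is installed on the Databricks cluster alongside MATLAB Runtime, after which Python notebook cells can call the packaged algorithm on demand or as part of a scheduled job.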