New Feature: Dataflow JDBC/ODBC Connector
This feature leverages the same intelligent persistence layer that backs Queryable Dataflows and Structured Data Lakes, and joins it (pun intended) with the Spark SQL Thrift JDBC/ODBC Server, so you can directly access and query your Dataflows from the environment of your choice, whether that’s a BI tool like Looker or your favorite SQL workbench.
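To give a feel for what this enables, here’s a minimal sketch of connecting from Python with PyHive (the Thrift Server speaks the HiveServer2 protocol). The host, port, credentials, and the `my_dataflow.my_transform` table name are placeholders, not Ascend-specific values:

```python
from pyhive import hive  # pip install 'pyhive[hive]'

# Placeholder host/port/credentials -- substitute your own deployment's values.
conn = hive.Connection(host="ascend.example.com", port=10000, username="analyst")
cursor = conn.cursor()

# Query a Dataflow stage as if it were a read-only SQL table.
cursor.execute("SELECT * FROM my_dataflow.my_transform LIMIT 10")
for row in cursor.fetchall():
    print(row)
```

Any JDBC- or ODBC-capable tool connects the same way, pointed at the same endpoint.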
How-to: Redshift Data Ingest & ETL with Ascend.io
This How-to provides an overview of ingesting data into Redshift by building a production pipeline that automatically keeps data up to date, retries failures, and notifies you of any irrecoverable issues.
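Ascend’s connector manages the load step itself; for context, the hand-rolled equivalent of that one step via Spark’s generic JDBC writer looks roughly like the sketch below. The URL, table, and credentials are placeholders, and the Redshift JDBC driver must be on the classpath:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("redshift-load-sketch").getOrCreate()
df = spark.read.parquet("s3a://my-bucket/clean_events/")  # placeholder source

# Generic Spark JDBC write; Ascend's connector layers retries and
# keep-up-to-date scheduling on top of a step like this.
(df.write.format("jdbc")
   .option("url", "jdbc:redshift://example.redshift.amazonaws.com:5439/dev")
   .option("dbtable", "public.clean_events")
   .option("user", "ingest_user")
   .option("password", "placeholder")  # better: pull from a secrets store
   .option("driver", "com.amazon.redshift.jdbc42.Driver")
   .mode("append")
   .save())
```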
New Feature: Scala & Java Transforms
Today we’re excited to formally announce support for Scala & Java transforms. Not only does this expand our support to two of the most popular languages among data engineers, it also marries that capability with the advanced orchestration and optimizations provided by Ascend.
How-to: Snowflake Data Ingest & ETL with Ascend.io
This How-to provides an overview of ingesting data into Snowflake by building a production pipeline that automatically keeps data up to date, retries failures, and notifies you of any irrecoverable issues.
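As with Redshift above, the connector takes care of the load mechanics for you; a hand-rolled version of that step through the Snowflake Spark connector would look roughly like this (connection options are placeholders, and the connector JARs must be available to Spark):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("snowflake-load-sketch").getOrCreate()
df = spark.read.parquet("s3a://my-bucket/clean_events/")  # placeholder source

sf_options = {  # placeholder connection details
    "sfURL": "example.snowflakecomputing.com",
    "sfUser": "ingest_user",
    "sfPassword": "placeholder",  # better sourced from a secrets store
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "LOAD_WH",
}

# Write through the Snowflake Spark connector.
(df.write.format("net.snowflake.spark.snowflake")
   .options(**sf_options)
   .option("dbtable", "CLEAN_EVENTS")
   .mode("append")
   .save())
```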
New Feature: Credentials Vault
Credentials Vault is a centralized place to store and manage the secrets used by your dataflows. The feature makes it even easier to collaborate with others to quickly ingest from, and write to, external data systems. It also gives site administrators an interface to audit and control all credentials in use across the Ascend platform.
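The pattern this encourages is referencing secrets by name instead of embedding them in dataflow code. As a purely hypothetical illustration (the helper below is not Ascend’s API; it models runtime injection with an environment variable):

```python
import os

def resolve_credential(name: str) -> str:
    """Hypothetical helper: the dataflow references a credential by name,
    and the platform injects the secret at runtime (modeled here with an
    environment variable for illustration)."""
    return os.environ[f"VAULT_{name.upper()}"]

# No secret literals in dataflow code -- auditable, revocable, shareable.
s3_secret = resolve_credential("s3_reader")
```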
New Feature: Recursive Transforms
A recursive transform is a transform that uses the output of its previous run as an input to its next run. This pattern is often used to incrementally aggregate data. In cases where the historical data is substantially larger than the aggregated data, the pattern can yield a significant reduction in processing time and compute resources.
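For intuition, here’s a minimal sketch of the pattern in PySpark terms (the column names are illustrative): each run aggregates only the new slice of raw data, then folds the previous run’s output back in rather than rescanning all history.

```python
from pyspark.sql import DataFrame, functions as F

def recursive_aggregate(new_events: DataFrame, previous_totals: DataFrame) -> DataFrame:
    """Incrementally aggregate: combine this run's new events with the
    output of the previous run instead of reprocessing all history."""
    incremental = new_events.groupBy("user_id").agg(F.sum("amount").alias("amount"))
    return (
        previous_totals.unionByName(incremental)
        .groupBy("user_id")
        .agg(F.sum("amount").alias("amount"))
    )
```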
Data Lake ETL Tutorial: Transforming Data
This tutorial will give you an overview of the “T” in ETL, namely, how to start transforming your data before loading it into its final destination. We use SQL in this example, but Ascend supports Python/PySpark and Scala/Java transformations as well.
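As a taste of what the tutorial covers, a typical SQL cleaning transform (run here through PySpark, with illustrative columns and data) might look like:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("transform-sketch").getOrCreate()
raw = spark.createDataFrame(
    [("2021-01-05", " NYC ", 12.0), ("2021-01-05", None, 3.5)],
    ["event_date", "city", "amount"],
)
raw.createOrReplaceTempView("raw_events")

# A typical "T" step: cast types, trim strings, drop rows missing a key.
clean = spark.sql("""
    SELECT CAST(event_date AS DATE) AS event_date,
           TRIM(city)               AS city,
           amount
    FROM raw_events
    WHERE city IS NOT NULL
""")
clean.show()
```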
Data Lake ETL Tutorial: Using Ascend No- and Low-Code Connectors to Load Data
Now that we’ve extracted some data from S3 and cleaned it up with a SQL transform, we can start on the “L” of ETL and write our data back out to our data lake. Follow this guide to learn how.
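In hand-rolled Spark terms, the load step the guide automates boils down to a partitioned write back to object storage (the bucket paths below are placeholders):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("load-sketch").getOrCreate()
clean = spark.read.parquet("s3a://my-bucket/staging/clean_events/")  # placeholder

# The "L" step: write back to the lake, partitioned for downstream reads.
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3a://my-bucket/clean_events/"))
```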
Azure Beta or: How I Learned to Stop Worrying and Love the Kube
We run everything on Kubernetes and manage system state in MySQL plus a large-scale blob store (Google Cloud Storage, S3, or Azure Blob Storage).
How HNI Drives Manufacturing Digital Transformation with Data Pipelines
HNI Corporation is a US$2.2B workplace furnishings and hearth provider with 19 distinct brands and global manufacturing sites. HNI is driving the digital transformation of its business with data. Read on to learn how their decision sciences team continually improves business operations using data, analytics, and machine learning.
New: support for 75 more SQL functions
We’re announcing the addition of 75 new SQL functions, now available in every customer environment by default. These include powerful new functions like BOOL_AND, COUNT_IF, FIND_IN_SET, MAX_BY, WEEKDAY, and, well, 70 more!
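Here’s a quick taste of a few of them, following their standard Spark SQL semantics (the table and data are illustrative):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-functions-sketch").getOrCreate()
spark.createDataFrame(
    [("NYC", 12.0, True), ("SF", 31.5, True), ("LA", 7.25, False)],
    ["city", "amount", "shipped"],
).createOrReplaceTempView("orders")

spark.sql("""
    SELECT MAX_BY(city, amount)  AS top_city,    -- city with the largest amount
           COUNT_IF(amount > 10) AS big_orders,  -- rows matching a predicate
           BOOL_AND(shipped)     AS all_shipped  -- true only if every row is true
    FROM orders
""").show()
```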
Queryable Dataflows: Combining the Interactivity of Warehouses with the Scale of Pipelines
Now in Ascend, every stage of every Dataflow is queryable without switching tools or disrupting your development process. As part of this capability, we’ve built an interactive query editor that lets you interact with Connectors and Transforms in Dataflows, as well as Data Feeds broadcast from other Data Services, as though they were read-only tables in a SQL database.
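Whether you run it in the built-in editor or over the JDBC/ODBC connector described above, the query shape is the same. A hypothetical example via PyHive, joining a Transform in one Dataflow against a Data Feed from another Data Service (all qualified names below are placeholders):

```python
from pyhive import hive

conn = hive.Connection(host="ascend.example.com", port=10000, username="analyst")
cursor = conn.cursor()

# Hypothetical stage and feed names; each behaves as a read-only table.
cursor.execute("""
    SELECT t.user_id, t.total_amount, f.segment
    FROM my_dataflow.user_totals t
    JOIN marketing_service.segments_feed f
      ON t.user_id = f.user_id
    ORDER BY t.total_amount DESC
    LIMIT 20
""")
print(cursor.fetchall())
```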