Ascend Use Case
ETL for API Data
Out-of-the-box connectors are great… when they exist and are maintained by the vendor for your data source. When they don’t exist, you’re stuck with the tedious task of building and maintaining an API connector codebase and deployment end-to-end. In contrast, the custom connector framework on Ascend lets you focus on just the parts you need from the API, such as fetching, filtering, and transforming. Ascend automates the processing, infrastructure, and maintenance behind the corner cases of API operation: parsing, scheduling, retry/error handling, and loading the data into pipelines as well as into its end destination (data warehouse, blob store, etc.).
5x Faster
Build a custom connector 5x faster. No pre-built connector? No problem.
Less Time Maintaining
Spend less time maintaining and troubleshooting custom connectors.
Historical Data
Preserve copies of historical data no longer available in the API.
How it Works
- Ingest raw data from any API by specifying your data source(s) via our no-code UI, low-code YAML spec, or full-code API & SDKs (see the connector sketch after this list).
- Unify and join your API data with other datasets from your data lake, warehouse, and elsewhere with simple transforms in SQL, Python, Java, or Scala.
- Write the resulting data directly to your favorite database, warehouse, blob store, and more.
- Ascend handles the rest — orchestrating runtimes, handling retries/errors, managing parallel jobs, and persisting copies of your data.
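For the full-code ingest path, the only API-specific code you supply is the fetch logic itself. The sketch below is illustrative rather than actual Ascend SDK code: the endpoint, pagination scheme, and function name are hypothetical stand-ins for whatever your source API requires.

```python
import requests

API_URL = "https://api.example.com/v1/orders"   # hypothetical endpoint
PAGE_SIZE = 500

def fetch_pages(api_token: str):
    """Yield raw JSON pages from the API, following cursor-based pagination.

    This is the only API-specific logic you would write; scheduling, retries,
    parallelism, and persistence of the raw pages are handled by the platform.
    """
    cursor = None
    while True:
        params = {"limit": PAGE_SIZE}
        if cursor:
            params["cursor"] = cursor
        resp = requests.get(
            API_URL,
            headers={"Authorization": f"Bearer {api_token}"},
            params=params,
            timeout=30,
        )
        resp.raise_for_status()
        page = resp.json()
        yield page["results"]               # hypothetical response key
        cursor = page.get("next_cursor")
        if not cursor:
            break
```

Everything around this function (when it runs, how failures are handled, where the raw pages land) is supplied by the platform rather than written by hand.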
Runtime Orchestration
Ascend automatically provisions all the infrastructure, runs your code, and provides the inline interfaces through which every record is processed. The system persists raw data and parses records per your logic; if you change the parsing function later, the data is reprocessed without having to be refetched from the API.
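To make that separation concrete, here is a minimal sketch of a parse step that operates only on persisted raw payloads; the field names and record shape are assumptions, not a prescribed schema.

```python
import json

def parse_records(raw_page: bytes) -> list[dict]:
    """Turn one persisted raw API response into structured records.

    Because the raw pages are already stored, changing this function
    (for example, extracting an extra field) only reprocesses stored data;
    the API is not called again.
    """
    payload = json.loads(raw_page)
    return [
        {
            "order_id": item["id"],            # hypothetical fields
            "amount_cents": item["amount"],
            "created_at": item["created_at"],
        }
        for item in payload.get("results", [])
    ]
```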
Managing Parallel Jobs
Ascend automatically fetches data from APIs in a fully parallelized job architecture with no additional code. Common errors are retried automatically on a per-job basis; if they continue to fail, they are escalated to users through rich notification management features.
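This retry behavior is something the platform performs for you rather than code you write; the sketch below is only a conceptual illustration of per-job retries with escalation, and the backoff values and logging hook are arbitrary assumptions.

```python
import logging
import time

log = logging.getLogger("connector")

def run_with_retries(job, max_attempts: int = 3, base_delay: float = 2.0):
    """Run a single fetch job, retrying transient failures with backoff.

    If every attempt fails, the error is escalated (here it is simply logged;
    on Ascend it would surface through the platform's notifications).
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except Exception as exc:                # broad catch, illustration only
            if attempt == max_attempts:
                log.error("Job failed after %d attempts: %s", attempt, exc)
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))
```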
Transform & Enrich Data
The API data ingestion functions sit inside a fully featured transformation context. This means you can leverage Ascend’s powerful transformation capabilities, built on Apache Spark, in a variety of languages including SQL, Python, Java, and Scala. In addition to fueling data pipelines on the Ascend platform, you can take any part of the parsed data and write it out to any number of other locations.
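Because the transformation layer is Spark-based, a downstream transform can look like ordinary PySpark. In the sketch below the table paths, join key, and output location are hypothetical, and on Ascend the input DataFrames would typically be handed to your transform by upstream components rather than read in by hand.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("enrich_api_data").getOrCreate()

# Hypothetical inputs: parsed API records plus a customer dimension from the lake.
orders = spark.read.parquet("s3://my-bucket/raw/orders/")        # assumed path
customers = spark.read.parquet("s3://my-bucket/dim/customers/")  # assumed path

# Join the API data with an existing dataset and compute a simple aggregate.
enriched = (
    orders.join(customers, on="customer_id", how="left")
          .groupBy("customer_id", "region")
          .agg(F.sum("amount_cents").alias("total_spend_cents"))
)

# Write the result onward, e.g. to a warehouse staging area or blob store.
enriched.write.mode("overwrite").parquet("s3://my-bucket/marts/customer_spend/")
```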