Load data from a CSV file into a Trino table

To create a table in Trino and load data from a CSV file stored in Azure Data Lake Storage (ADLS), you'll use Trino's Hive connector to register the CSV file as a table. The Hive connector, backed by a Hive metastore, allows Trino to query files in ADLS. Below is a step-by-step guide to achieving this.
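As a minimal sketch of that registration step, the Trino DDL might look like the following. The catalog name `hive`, the schema, table, and column names, and the ADLS account/container/path are all hypothetical placeholders; note that Trino's Hive connector exposes CSV columns only as `VARCHAR`:

```sql
-- Register an existing CSV directory in ADLS as an external Trino table.
-- Catalog, schema, table, and the abfss path are placeholder values.
CREATE TABLE hive.sales.daily_sales_csv (
    sale_id   VARCHAR,  -- CSV format in the Hive connector reads every column as VARCHAR
    amount    VARCHAR,
    sale_date VARCHAR
)
WITH (
    format = 'CSV',
    external_location = 'abfss://mycontainer@myaccount.dfs.core.windows.net/data/sales/',
    skip_header_line_count = 1  -- skip the CSV header row
);
```

When querying, cast the `VARCHAR` columns to the types you need, e.g. `SELECT CAST(amount AS DOUBLE) FROM hive.sales.daily_sales_csv`.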

How to reflect data in a Trino catalog table using a Parquet file generated from Databricks

To reflect data in a Trino catalog table using a Parquet file stored in an **Azure Blob Storage** container (generated from Databricks), follow these steps:

1. **Generate the Parquet file in Databricks**: In Databricks, write your data to a Parquet file stored in an Azure Blob Storage container. Use the `abfss` protocol for Azure Data Lake Storage.
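Under the same assumptions (placeholder catalog, schema, table, and path names), the Trino side of that workflow might be sketched as an external table registered over the directory Databricks wrote:

```sql
-- After Databricks has written Parquet files to the container
-- (e.g. df.write.parquet("abfss://...") on the Databricks side),
-- register that directory as an external table in Trino.
-- All names and the path below are hypothetical placeholders.
CREATE TABLE hive.sales.daily_sales_parquet (
    sale_id   BIGINT,
    amount    DOUBLE,
    sale_date DATE
)
WITH (
    format = 'PARQUET',
    external_location = 'abfss://mycontainer@myaccount.dfs.core.windows.net/data/sales_parquet/'
);
```

Unlike CSV, Parquet preserves real column types, so the declared types here must match the schema Databricks wrote to the files.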
