To create a table in Trino and load data from a CSV file stored in Azure Data Lake Storage (ADLS), you’ll use Trino’s Hive connector to register the CSV file as a table. The Hive connector, backed by a Hive metastore, allows Trino to query files in ADLS. Below is a step-by-step guide to achieve...
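As a minimal sketch of the registration step, the helper below builds the Hive-connector `CREATE TABLE` DDL that points an external Trino table at CSV files in ADLS. The catalog, schema, table, and `abfss://` path are placeholders, not values from the original guide; note that Trino's CSV format reads every column as `VARCHAR`, so casts happen at query time.

```python
def csv_table_ddl(catalog, schema, table, columns, location):
    """Build a CREATE TABLE statement for Trino's Hive connector that
    registers CSV files under `location` as an external table.
    `columns` is a dict of name -> type; with format = 'CSV', Trino
    expects all columns to be declared VARCHAR."""
    cols = ",\n  ".join(f"{name} {dtype}" for name, dtype in columns.items())
    return (
        f"CREATE TABLE {catalog}.{schema}.{table} (\n  {cols}\n)\n"
        "WITH (\n"
        "  format = 'CSV',\n"
        f"  external_location = '{location}'\n"
        ")"
    )

# Placeholder catalog/schema/path — substitute your own.
ddl = csv_table_ddl(
    "hive", "sales", "orders",
    {"order_id": "VARCHAR", "amount": "VARCHAR"},
    "abfss://container@account.dfs.core.windows.net/raw/orders/",
)
print(ddl)
```

You would submit the resulting statement through the Trino CLI or any Trino client; the table then reads the CSV files in place, without copying data.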
How to reflect data in a Trino catalog table using a Parquet file generated from Databricks
To reflect data in a Trino catalog table using a Parquet file stored in an **Azure Blob Storage** container (generated from Databricks), follow these steps:

1. **Generate Parquet File in Databricks**: In Databricks, write your data to a Parquet file stored in an Azure Blob Storage container. Use the `abfss` protocol for Azure Data Lake...
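When declaring the Trino table that mirrors the Databricks output, the column types must match the Parquet schema Spark wrote. The mapping below is a sketch of the common Spark-to-Trino type correspondences; the schema in the example is hypothetical.

```python
# Common Spark (Databricks) column types and the Trino types you would
# declare when registering the Parquet output as a Hive-connector table.
SPARK_TO_TRINO = {
    "string": "VARCHAR",
    "int": "INTEGER",
    "long": "BIGINT",
    "double": "DOUBLE",
    "boolean": "BOOLEAN",
    "timestamp": "TIMESTAMP",
    "date": "DATE",
}

def trino_columns(spark_schema):
    """Translate [(name, spark_type), ...] into a Trino column list."""
    return ", ".join(f"{name} {SPARK_TO_TRINO[t]}" for name, t in spark_schema)

print(trino_columns([("id", "long"), ("name", "string")]))
# id BIGINT, name VARCHAR
```

The generated column list drops into a `CREATE TABLE ... WITH (format = 'PARQUET', external_location = 'abfss://...')` statement so Trino reads the Parquet files Databricks produced in place.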
How to connect a Trino database with Azure Data Lake to generate a Parquet file from Trino?
To connect Trino with Azure Data Lake Storage (ADLS) Gen2 and generate Parquet files from Trino queries, you need to configure Trino to access ADLS Gen2 using the Hive or Delta Lake connector, set up authentication, and use SQL statements to write query results as Parquet files. Below is a step-by-step guide based on the...
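The "write query results as Parquet" step typically takes the form of a `CREATE TABLE AS SELECT` (CTAS). The helper below is a sketch that assembles such a statement; the catalog, schema, path, and query are illustrative placeholders (writing to an `external_location` also assumes the Hive connector is configured to permit non-managed table writes).

```python
def ctas_parquet(catalog, schema, table, location, select_sql):
    """Build a CREATE TABLE AS SELECT statement that makes Trino
    materialize the query result as Parquet files under `location`
    in ADLS Gen2."""
    return (
        f"CREATE TABLE {catalog}.{schema}.{table}\n"
        "WITH (\n"
        "  format = 'PARQUET',\n"
        f"  external_location = '{location}'\n"
        ")\n"
        f"AS {select_sql}"
    )

# Placeholder names and path — substitute your own.
stmt = ctas_parquet(
    "hive", "exports", "orders_parquet",
    "abfss://container@account.dfs.core.windows.net/exports/orders/",
    "SELECT * FROM hive.sales.orders",
)
print(stmt)
```

Running the statement once creates the table and writes the Parquet files; subsequent inserts into the same table append additional Parquet files under the same location.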