Engineering

Spark Scala - Read JSON File

JSON remains the lingua franca of data interchange. APIs return it, logging systems emit it, and configuration files use it. When you’re building data pipelines with Apache Spark, you’ll inevitably…

Read more →
Engineering

Spark Scala - Read CSV File

CSV files refuse to die. Despite the rise of Parquet, ORC, and Avro, you’ll still encounter CSV in nearly every data engineering project. Legacy systems export it. Business users create it in Excel…

Read more →
Scala

Scala - Read CSV File

For simple CSV files without complex quoting or escaping, Scala’s standard library provides sufficient functionality. Use scala.io.Source to read files line by line and split on delimiters.

Read more →
R

R - Read Fixed-Width File

Fixed-width files allocate specific character positions for each field. Unlike CSV files that use delimiters, these files rely on consistent positioning. A record might look like this…

Read more →
R

R - Read from Database (DBI/RSQLite)

The DBI (Database Interface) package provides a standardized way to interact with databases in R. RSQLite implements this interface for SQLite databases, offering a zero-configuration option that…

Read more →
R

R - Read from URL/Web

Base R handles simple URL reading through readLines() and url() connections. This works for plain text, CSV files, and basic HTTP requests without authentication.

Read more →
Python

PySpark - Read Delta Lake Table

Reading a Delta Lake table in PySpark requires minimal configuration. The Delta Lake format is built on top of Parquet files with a transaction log, making it straightforward to query.

Read more →
Python

PySpark - Read from Hive Table

Before reading from Hive tables, configure your SparkSession to connect with the Hive metastore. The metastore contains metadata about tables, schemas, partitions, and storage locations.

Read more →
Python

PySpark - Read from JDBC/Database

PySpark’s JDBC connector enables distributed reading from relational databases with automatic partitioning across executors, but requires careful configuration of partition columns and bounds to…
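To make the partitioning point concrete, here is a plain-Python sketch (no Spark required) of roughly how the JDBC reader turns `partitionColumn`, `lowerBound`, `upperBound`, and `numPartitions` into per-executor WHERE clauses. The function name is illustrative, and Spark’s real logic handles a few more edge cases:

```python
def jdbc_partition_predicates(column, lower, upper, num_partitions):
    """Approximate the WHERE clauses a JDBC reader generates when
    splitting a numeric partition column across executors.

    Each executor runs the base query with one predicate appended,
    so rows are fetched in parallel without overlap.
    """
    stride = (upper - lower) // num_partitions
    predicates = []
    bound = lower
    for i in range(num_partitions):
        low = bound
        bound += stride
        if i == 0:
            # First partition also picks up NULLs and values below lowerBound.
            predicates.append(f"{column} < {bound} OR {column} IS NULL")
        elif i == num_partitions - 1:
            # Last partition is unbounded above so no rows are dropped.
            predicates.append(f"{column} >= {low}")
        else:
            predicates.append(f"{column} >= {low} AND {column} < {bound}")
    return predicates

for p in jdbc_partition_predicates("id", 0, 1000, 4):
    print(p)
```

Each predicate becomes one parallel query, which is why poorly chosen bounds skew all the work onto a single executor.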

Read more →
Pandas

Pandas - Read from S3 Bucket

Pandas integrates seamlessly with S3 through the s3fs library, allowing you to read files directly using standard read_csv(), read_parquet(), and other I/O functions with S3 URLs.
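As a runnable illustration of that mechanism without AWS credentials, the sketch below uses fsspec’s in-memory filesystem; pandas routes any `scheme://` path through fsspec, so once `s3fs` is installed, swapping the `memory://` URL for an `s3://` one is the only change. The file name and data are made up:

```python
import fsspec
import pandas as pd

# Write a small CSV into fsspec's in-memory filesystem; with s3fs installed,
# an "s3://bucket/key.csv" URL goes through the exact same dispatch.
with fsspec.open("memory://demo/events.csv", "w") as f:
    f.write("event,count\nsignup,3\nlogin,11\n")

# pandas hands the URL to fsspec and reads the file like any local path.
events = pd.read_csv("memory://demo/events.csv")
print(events.shape)  # (2, 2)
```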

Read more →
Python

How to Read Parquet Files in Polars

Parquet has become the de facto standard for analytical data storage. Its columnar format, efficient compression, and schema preservation make it ideal for data engineering workflows. But the tool…

Read more →
Pandas

How to Read CSV Files in Pandas

CSV files remain the lingua franca of data exchange. Despite the rise of Parquet, JSON, and database connections, you’ll encounter CSVs constantly—from client exports to API downloads to legacy…
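As a small, self-contained sketch (the inline data is made up), `read_csv` accepts any file-like object, and `parse_dates`/`dtype` head off the usual type surprises:

```python
import io
import pandas as pd

# Inline CSV stands in for a client export; pandas accepts any file-like object.
raw = io.StringIO(
    "order_id,order_date,amount\n"
    "1001,2024-03-01,19.99\n"
    "1002,2024-03-02,5.00\n"
)

# Without parse_dates, order_date loads as a plain object (string) column.
orders = pd.read_csv(raw, parse_dates=["order_date"], dtype={"order_id": "int64"})
print(orders.dtypes)
```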

Read more →
Python

How to Read CSV Files in Polars

Polars has rapidly become the go-to DataFrame library for Python developers who need speed without sacrificing usability. Built in Rust with a Python API, it consistently outperforms pandas on CSV…

Read more →
Pandas

How to Read Excel Files in Pandas

Excel files remain stubbornly ubiquitous in data workflows. Whether you’re receiving sales reports from finance, customer data from marketing, or research datasets from academic partners, you’ll…
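A hedged round-trip sketch; it assumes an Excel engine such as `openpyxl` is available, and the report contents are made up:

```python
import pandas as pd

# Write a small .xlsx; index=False keeps the row index out of the sheet.
report = pd.DataFrame({"region": ["EMEA", "APAC"], "revenue": [120000, 98000]})
report.to_excel("report.xlsx", sheet_name="Q1", index=False)

# sheet_name selects which tab to read from the workbook.
loaded = pd.read_excel("report.xlsx", sheet_name="Q1")
print(loaded)
```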

Read more →
Pandas

How to Read JSON Files in Pandas

JSON has become the lingua franca of web APIs and configuration files. It’s human-readable, flexible, and ubiquitous. But flexibility comes at a cost—JSON’s nested, hierarchical structure doesn’t map…
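One concrete case of that mismatch is nested objects: `pd.json_normalize` flattens them into dotted column names (the payload below is invented):

```python
import pandas as pd

# Nested API-style payload: one record per user, address nested a level down.
payload = [
    {"id": 1, "name": "Ada", "address": {"city": "London", "zip": "NW1"}},
    {"id": 2, "name": "Grace", "address": {"city": "Arlington", "zip": "22201"}},
]

# json_normalize flattens each nested object into "parent.child" columns.
flat = pd.json_normalize(payload)
print(list(flat.columns))  # ['id', 'name', 'address.city', 'address.zip']
```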

Read more →
Python

How to Read JSON Files in Polars

Polars has become the go-to DataFrame library for performance-conscious Python developers. While pandas remains ubiquitous, Polars consistently benchmarks 5-20x faster for most operations, and JSON…

Read more →