Data Engineering with Hadoop
1: What is Big Data and Hadoop
What can you expect from this course? (2:09)
Introduction to Big Data (14:49)
What is Hadoop? Why Hadoop? (5:37)
Hadoop Architecture – Overview (2:38)
Hadoop Architecture – Key services (7:12)
Storage/Processing characteristics (7:50)
Store and process data in HDFS (3:55)
Handling failures - Part 1 (5:09)
Handling failures - Part 2 (7:32)
Rack Awareness (5:58)
Hadoop 1 vs. Hadoop 2 (12:50)
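As a preview of the hands-on side of this section, the "Store and process data in HDFS" lesson comes down to shell commands along these lines (a minimal sketch; the paths and file names are illustrative, not taken from the course):

    # Create a directory in HDFS and copy a local file into it
    hdfs dfs -mkdir -p /user/demo/input
    hdfs dfs -put sales.csv /user/demo/input/

    # List the directory and read the file back
    hdfs dfs -ls /user/demo/input
    hdfs dfs -cat /user/demo/input/sales.csv

    # Inspect block placement and replication (ties into the
    # failure-handling and rack-awareness lessons above)
    hdfs fsck /user/demo/input/sales.csv -files -blocks -locations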
2: Hadoop Distributions and Setup
Hadoop Ecosystem (3:35)
Vanilla/HDP/CDH/Cloud distributions (10:11)
Install Cloudera Quickstart Docker (7:18)
Hands-on with Linux and Hadoop commands (5:48)
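For the setup lesson, the Cloudera QuickStart container is typically started with the invocation Cloudera documents, roughly as below (a sketch; the port mapping for the Hue UI is an assumption about a common setup, not a course requirement):

    # Pull and start the Cloudera QuickStart container
    docker pull cloudera/quickstart:latest
    docker run --hostname=quickstart.cloudera --privileged=true \
      -t -i -p 8888:8888 \
      cloudera/quickstart /usr/bin/docker-quickstart

    # Inside the container, sanity-check the cluster
    hadoop version
    hdfs dfs -ls /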
3: Data Warehousing with Apache Hive
Hive Overview (4:53)
How Hive works (5:56)
Hive query execution flow (4:58)
Creating a Data Warehouse & Loading data (5:09)
Creating a Hive Table (21:17)
Load data from local & HDFS (17:18)
Internal tables vs External tables (17:19)
Partitioning & Bucketing. (Cardinality concept) (16:23)
Static Partitioning - Lab (14:56)
Dynamic Partitioning - Lab (13:54)
Bucketing - Lab (22:31)
Storing Hive query output (11:33)
Hive SerDe (14:25)
ORC File Format (14:09)
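To give a flavour of the Hive lessons, here is a minimal HiveQL sketch run through the hive CLI (the schema, table names, and paths are invented for illustration) covering external tables, dynamic partitioning, and ORC storage:

    hive <<'EOF'
    -- External table over delimited files already sitting in HDFS;
    -- dropping it removes only the metadata, not the data
    CREATE EXTERNAL TABLE sales_raw (
      id      INT,
      amount  DOUBLE,
      country STRING
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    STORED AS TEXTFILE
    LOCATION '/user/demo/input';

    -- Managed table, partitioned by a low-cardinality column
    -- and stored in the ORC columnar format
    CREATE TABLE sales_orc (
      id     INT,
      amount DOUBLE
    )
    PARTITIONED BY (country STRING)
    STORED AS ORC;

    -- Dynamic partitioning: Hive derives one partition per country
    SET hive.exec.dynamic.partition.mode=nonstrict;
    INSERT OVERWRITE TABLE sales_orc PARTITION (country)
    SELECT id, amount, country FROM sales_raw;
    EOF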
4: Import/Export Data with Apache Sqoop
Sqoop overview (3:51)
Sqoop list-databases and list-tables (6:30)
Sqoop Eval (3:58)
Import RDBMS table with Sqoop (11:39)
Handling parallelism in Sqoop (9:01)
Import table without primary key (11:00)
Custom Query for Sqoop Import (8:47)
Incremental Sqoop Import - Append (9:51)
Incremental Sqoop Import - Last Modified (13:54)
Sqoop Job (8:00)
Sqoop Import to a Hive table (10:58)
Sqoop Import all tables - Part 1 (6:19)
Sqoop Import all tables - Part 2 (14:02)
Sqoop Export (6:13)
Export Hive table (4:35)
Export with Staging table (6:23)
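Finally, as a preview of this Sqoop section, a typical import/export round-trip looks roughly like the following (the JDBC URL, table names, and paths are placeholders):

    # Import an RDBMS table into HDFS with 4 parallel mappers,
    # splitting the work on the primary-key column
    sqoop import \
      --connect jdbc:mysql://dbhost/retail \
      --username demo -P \
      --table orders \
      --split-by order_id \
      --num-mappers 4 \
      --target-dir /user/demo/orders

    # Incremental append: fetch only rows whose order_id exceeds
    # the last value seen by the previous run
    sqoop import \
      --connect jdbc:mysql://dbhost/retail \
      --username demo -P \
      --table orders \
      --incremental append \
      --check-column order_id \
      --last-value 1000 \
      --target-dir /user/demo/orders

    # Export results back to the RDBMS through a staging table,
    # so a failed export doesn't leave the target half-written
    sqoop export \
      --connect jdbc:mysql://dbhost/retail \
      --username demo -P \
      --table order_totals \
      --staging-table order_totals_stage \
      --export-dir /user/demo/order_totals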