Big Data and Hadoop for Beginners – with Hands-on Course Site

What you’ll learn
- Understand technology trends, salary trends, the Big Data market, and different job roles in Big Data
- Learn what Hadoop is for and how it works
- Understand the complex architecture of Hadoop and its components
- Install Hadoop on your own machine
- High-quality documents
- Demos: Running HDFS commands, Hive queries, Pig queries
- Sample data sets and scripts (HDFS commands, Hive sample queries, Pig sample queries, Data Pipeline sample queries)
- Start writing your own Hive and Pig code to process huge volumes of data
- Design your data pipeline using Pig and Hive
- Understand modern data architecture: Data Lake
- Practice with Big Data sets
Requirements
- Basic knowledge of SQL and RDBMS would be a plus
- A machine running Mac, Linux/Unix, or Windows
The main objective of this course is to help you understand the complex architecture of Hadoop and its components, point you in the right direction to get started, and have you working with Hadoop and its components quickly.
It covers everything you need as a Big Data beginner: the Big Data market, different job roles, technology trends, the history of Hadoop, HDFS, the Hadoop Ecosystem, Hive, and Pig. We will see how a beginner should start with Hadoop, and the course comes with plenty of hands-on examples to help you learn Hadoop quickly.
The course has six sections and focuses on the following topics:
Big Data at a Glance: Learn about Big Data and the different job roles in the Big Data market. Know Big Data salary trends around the globe. Learn about the hottest technologies and their trends in the market.
Getting Started with Hadoop: Understand Hadoop and its complex architecture. Learn the Hadoop Ecosystem with simple examples. Know the different versions of Hadoop (Hadoop 1.x vs Hadoop 2.x), the different Hadoop vendors in the market, and Hadoop in the cloud. Understand how Hadoop uses the ELT approach. Learn to install Hadoop on your machine. We will run HDFS commands from the command line to manage HDFS.
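As a taste of what managing HDFS from the command line looks like, here is a minimal sketch. The paths and file names are hypothetical, and the commands assume a running Hadoop installation:

```
# Create a directory in HDFS and copy a local file into it
hdfs dfs -mkdir -p /user/demo/input
hdfs dfs -put sales.csv /user/demo/input/

# List the directory and inspect the file contents
hdfs dfs -ls /user/demo/input
hdfs dfs -cat /user/demo/input/sales.csv

# Check how much space the data occupies
hdfs dfs -du -h /user/demo/input
```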
Getting Started with Hive: Understand what kind of problem Hive solves in Big Data. Learn its architectural design and working mechanism. Know the data models in Hive, the different file formats Hive supports, Hive queries, and more. We will run queries in Hive.
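To give a flavour of the kind of queries covered, here is a minimal HiveQL sketch. The table name, columns, and file layout are hypothetical:

```
-- Define a table over comma-separated data stored in HDFS
CREATE TABLE IF NOT EXISTS sales (
  order_id INT,
  product  STRING,
  amount   DOUBLE
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE;

-- Query it with familiar SQL syntax: total revenue per product
SELECT product, SUM(amount) AS revenue
FROM sales
GROUP BY product;
```

Note how Hive lets you apply a table schema to files already sitting in HDFS and query them with SQL-like syntax.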
Getting Started with Pig: Understand how Pig solves problems in Big Data. Learn its architectural design and working mechanism. Understand how Pig Latin works in Pig, and the differences between SQL and Pig Latin. We will demo different queries in Pig.
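To illustrate how Pig Latin differs from SQL, here is a sketch of the same "revenue per product" idea written as a step-by-step dataflow rather than a single declarative statement. The path and schema are hypothetical:

```
-- Pig Latin is a dataflow language: each step produces a new relation
sales = LOAD '/user/demo/input/sales.csv'
        USING PigStorage(',')
        AS (order_id:int, product:chararray, amount:double);

-- Group rows by product
by_product = GROUP sales BY product;

-- Aggregate each group to compute total revenue
revenue = FOREACH by_product
          GENERATE group AS product, SUM(sales.amount) AS revenue;

-- Print the result
DUMP revenue;
```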
Use Cases: Real-life applications of Hadoop are important for understanding Hadoop and its components, so we will learn by designing a sample data pipeline in Hadoop to process Big Data. You will also see how companies are adopting modern data architecture, i.e. the Data Lake, in their data infrastructure.
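One way such a pipeline can be sketched: Pig handles the cleansing/transformation step, and Hive then exposes the cleaned data for analysis. The paths and schema below are hypothetical:

```
-- Step 1 (Pig): clean raw data and write it back to HDFS
raw   = LOAD '/data/raw/sales.csv' USING PigStorage(',')
        AS (order_id:int, product:chararray, amount:double);
clean = FILTER raw BY order_id IS NOT NULL AND amount > 0;
STORE clean INTO '/data/clean/sales' USING PigStorage(',');

-- Step 2 (Hive): point an external table at the cleaned output
-- CREATE EXTERNAL TABLE sales (order_id INT, product STRING, amount DOUBLE)
--   ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
--   LOCATION '/data/clean/sales';
-- SELECT product, SUM(amount) FROM sales GROUP BY product;
```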
Practice: Practice with huge data sets. Learn design and optimization techniques by building data models and data pipelines using data sets from real-life applications.