Big Data Fundamentals

This course is a survey of big data – the landscape, the technology behind it, the business drivers, and the strategic possibilities. “Big data” is a popular buzzword, but most organizations struggle to put it to practical use. Without assuming any prior knowledge of Apache Hadoop or big data management, this course shows professionals in a wide range of roles how to tap and manage the potential benefits of big data, including:
•    Discovering customer insights buried in your existing data
•    Uncovering product opportunities from data insights
•    Pinpointing decision points and criteria
•    Scaling your existing workflows and operations
•    Learning to ask questions that drive tangible business value from big data tools

Retail Price: $1,295.00

Next Date: 07/11/2024

Course Days: 2


Who should attend

This class is for anyone involved in project, product, or IT work who is actively consuming or considering big data services. No specific technical experience or prerequisites are needed. 
•    Software Engineers and Team Leads
•    Project Managers
•    Business Analysts
•    DBAs and Data Engineering teams
•    Business Customers
•    System Analysts


Pre-Requisites

No specific technical experience or prerequisites are needed.


Outline

Part 1: Introduction to Big Data

  1. Academic
  2. Early web
  3. Web-scale
    • 1994 – 2012
    • 2016
    • 2020

Part 2: Sources (Examples)

  1. Internet
  2. Transport systems
  3. Medical, healthcare
  4. Insurance
  5. Military and others

Part 3: Hadoop – the open-source platform for working with big data

  1. History
  2. Yahoo
  3. Platform fragmentation
  4. What usage looks like in the enterprise

Part 4: The concepts

  1. Load data how you find it
  2. Process it when you can
  3. Project it into various schemas on the fly
  4. Push it back to where you need it
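
The four steps above are the schema-on-read pattern that the rest of the course builds on. Here is a minimal sketch of the idea in plain Python; the file name events.log and its JSON-per-line layout are illustrative assumptions, not course materials:

    import json

    # Step 1 - load data how you find it: keep the raw event log exactly as it arrived.
    with open("events.log") as f:
        raw_lines = f.read().splitlines()

    # Steps 2 and 3 - process it when you can, projecting a schema on the fly:
    # parse each raw line and pull out only the fields this question needs.
    views_per_page = {}
    for line in raw_lines:
        record = json.loads(line)          # schema applied at read time, not load time
        url = record.get("url", "unknown")
        views_per_page[url] = views_per_page.get(url, 0) + 1

    # Step 4 - push it back to where you need it: write a tidy summary that a
    # downstream report or database load can pick up.
    with open("views_per_page.tsv", "w") as out:
        for url, count in views_per_page.items():
            out.write(f"{url}\t{count}\n")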

Part 5: The basics

  1. What it’s good for
  2. What can’t it do / disadvantages
  3. Most common use cases for big data

Part 6: Introduction to HDFS

  1. Robustness
  2. Data Replication
  3. Gotchas
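
A short sketch of how replication shows up in everyday HDFS use, run here through Python’s subprocess module; the paths and the replication factor of 5 are illustrative assumptions, and the commands require a working Hadoop client:

    import subprocess

    # Copy a local file into HDFS; the NameNode splits it into blocks and each
    # block is replicated across several DataNodes (3 copies by default).
    subprocess.run(["hdfs", "dfs", "-put", "events.log", "/data/events.log"], check=True)

    # Raise this file's replication factor to 5 and wait (-w) until the extra
    # copies exist - more replicas buys robustness at the cost of disk space.
    subprocess.run(["hdfs", "dfs", "-setrep", "-w", "5", "/data/events.log"], check=True)

    # Report the file's blocks, their locations, and overall replication health.
    subprocess.run(["hdfs", "fsck", "/data/events.log", "-files", "-blocks"], check=True)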

Part 7: MapReduce – the core big data function

  1. Map explained
  2. Sort and shuffle explained
  3. Reduce explained

Demonstration: Hadoop, HDFS, and MapReduce - Let’s try it!
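
As a preview of the demonstration, here is a minimal word-count sketch written for Hadoop Streaming, which lets the map and reduce steps be ordinary Python scripts that read stdin and write stdout. The script names and input are illustrative, not the exact demo used in class:

    # mapper.py - the map step: emit a (word, 1) pair for every word in the input.
    import sys

    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

    # reducer.py - the reduce step: Hadoop's sort-and-shuffle phase has already
    # grouped identical words next to each other, so we only sum consecutive counts.
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word != current_word:
            if current_word is not None:
                print(f"{current_word}\t{current_count}")
            current_word, current_count = word, 0
        current_count += int(count)
    if current_word is not None:
        print(f"{current_word}\t{current_count}")

A job like this is submitted with the hadoop-streaming jar, pointing -mapper and -reducer at the two scripts and -input/-output at HDFS paths; the counts land in the output directory as part files.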

Part 8: YARN

  1. How it fits
  2. How it works
  3. Resource Manager
  4. Application Master

Part 9: PIG

  1. What it is
  2. How it works
  3. Compatibilities
  4. Advantages
  5. Disadvantages

Demonstration: YARN and PIG - Let’s try it!

Part 10: Processing Data

  1. The Piggy Bank
  2. Loading and Illustrating the data
  3. Writing a Query
  4. Storing the Result
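
A minimal sketch of the load / query / store sequence above as a Pig Latin script, run in Pig’s local mode from Python; the file names and field layout are illustrative assumptions:

    import subprocess

    # A tiny Pig Latin script: LOAD the raw data, query it, STORE the result.
    script = """
    views  = LOAD 'events.tsv' USING PigStorage('\\t') AS (user_id:chararray, url:chararray);
    by_url = GROUP views BY url;
    counts = FOREACH by_url GENERATE group AS url, COUNT(views) AS hits;
    STORE counts INTO 'views_per_page';
    """

    with open("views.pig", "w") as f:
        f.write(script)

    # -x local runs Pig against the local filesystem instead of a Hadoop cluster.
    subprocess.run(["pig", "-x", "local", "views.pig"], check=True)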

Part 11: HIVE

  1. Data warehousing
  2. What it is, what it’s not
  3. Language compatibilities
  4. Advantages

Demonstration: HIVE - Let’s try it!
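
As a preview of the demonstration: Hive lays a SQL-style warehouse over files already sitting in HDFS. A minimal sketch, assuming a HiveServer2 instance on localhost and the beeline client on the PATH; the table name and location are illustrative:

    import subprocess

    # HiveQL: declare a table over data already in HDFS (schema-on-read),
    # then query it with familiar SQL.
    query = """
    CREATE EXTERNAL TABLE IF NOT EXISTS page_views (user_id STRING, url STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t'
    LOCATION '/data/page_views';

    SELECT url, COUNT(*) AS views
    FROM page_views
    GROUP BY url
    ORDER BY views DESC
    LIMIT 10;
    """

    # beeline is Hive's command-line client; -u gives the JDBC URL, -e runs a query string.
    subprocess.run(["beeline", "-u", "jdbc:hive2://localhost:10000", "-e", query], check=True)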

Example demo walkthrough: Contextual advertising

Part 12: OOZIE

  1. What it is
  2. Complex workflow environments
  3. Reducing time-to-market
  4. Frequency-based execution
  5. How it works with other big data tools

Example demo walkthrough: How to run a job
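
Concretely, running an Oozie job means handing the client a small properties file that points at a workflow definition stored in HDFS. A minimal sketch; the NameNode address, Oozie URL, and workflow path are illustrative assumptions:

    import subprocess

    # job.properties tells Oozie where the cluster lives and which workflow to run.
    props = """
    nameNode=hdfs://localhost:8020
    oozie.wf.application.path=${nameNode}/user/student/workflows/daily-report
    """
    with open("job.properties", "w") as f:
        f.write(props)

    # Submit and start the workflow through the Oozie server.
    subprocess.run(
        ["oozie", "job", "-oozie", "http://localhost:11000/oozie",
         "-config", "job.properties", "-run"],
        check=True,
    )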

Part 13: FLUME – stream, collect, store and analyze high-volume log data

  1. How it works: Event, source, sink, channel, agent and client
  2. How it works illustrated
  3. How it works demonstrated
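
A minimal sketch of the pieces named in item 1, following the standard single-agent layout: a netcat source accepts events, a memory channel buffers them, and a logger sink writes them out. The agent name, port, and file names are illustrative, and flume-ng must be on the PATH:

    import subprocess

    # One agent "a1": source -> channel -> sink, wired together at the bottom.
    config = """
    a1.sources = r1
    a1.channels = c1
    a1.sinks = k1

    a1.sources.r1.type = netcat
    a1.sources.r1.bind = localhost
    a1.sources.r1.port = 44444

    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000

    a1.sinks.k1.type = logger

    a1.sources.r1.channels = c1
    a1.sinks.k1.channel = c1
    """
    with open("example.conf", "w") as f:
        f.write(config)

    # Start the agent; text sent to localhost:44444 flows source -> channel -> sink.
    subprocess.run(
        ["flume-ng", "agent", "--conf", "conf", "--conf-file", "example.conf", "--name", "a1"],
        check=True,
    )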

Part 14: SPARK

  1. Move over, 2012 Big Data tools: Apache SPARK is the new power tool
  2. The new open source cluster framework
  3. When SPARK performs 100 times faster
  4. Performance comparison of Spark and Hadoop
  5. What else can it do?
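
A minimal sketch of the same views-per-page count in PySpark, Spark’s Python API. Much of the speedup referenced in items 3 and 4 comes from keeping intermediate data in memory between steps, which cache() makes explicit; the input file is an illustrative assumption:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("page-views").getOrCreate()

    # Load raw tab-separated events; nothing runs yet - Spark builds a lazy plan.
    events = spark.read.csv("events.tsv", sep="\t").toDF("user_id", "url")

    # Keep the parsed data in memory so repeated queries skip re-reading from disk;
    # this is a large part of Spark's advantage over chains of disk-based MapReduce jobs.
    events.cache()

    # The same query as the MapReduce and Hive sketches: views per page.
    events.groupBy("url").count().orderBy("count", ascending=False).show(10)

    spark.stop()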

Part 15: HBASE

  1. What it is
  2. Common use cases

Part 16: Using External Tools


Course Dates            | Course Times (EST)  | Delivery Mode | GTR
7/11/2024 - 7/12/2024   | 10:00 AM - 6:00 PM  | Virtual       | Enroll
10/3/2024 - 10/4/2024   | 8:30 AM - 4:30 PM   | Virtual       | Enroll