Hadoop HDFS for analysis. In this way, a company can track the main product areas its users search in, for example, mobiles in electronics, or sports shoes and gym equipment in the sports category.
Flume is used to move the log data generated by application servers into HDFS at high speed.
7.4.4 Components of Flume
Event: An event is a single log entry or unit of data that Flume transports.
Source: A source is the component through which data enters a Flume workflow.
Sink: A sink is responsible for delivering data to the desired destination.
Channel: A channel is the conduit that carries events between a source and a sink.
Agent: An agent is any JVM process that runs Flume.
Client: A client transmits events to a source operating within an agent.
7.4.5 Congure Flume to Ingest Web Log Data
from a Local Directory to HDFS
Apache web server logs are generally stored in files on local machines running the server. In this exercise, we will push the web log files into a local spool directory and then use Flume to collect the data.
What is a Spool Directory: This source lets you insert data by placing files into a 'spooling' directory on disk. This type of source watches the specified directory for new files and parses events out of new files as they appear. After a given file has been fully read into the channel, it is renamed to indicate completion (for example, to 'filename.COMPLETED'), or optionally deleted.
Both the local and HDFS directories must exist before using the spooling directory source. In this case, we will be using a spool directory as our source and HDFS as the destination.
The path of the web server log in the local file system is /data/weblog/*.*
Create an HDFS directory for the Flume-ingested data:
hadoop fs -mkdir /flume_weblog_sink
Create the spool directory on the desktop, which will act as our web log simulator and store data files for Flume to ingest. On the local file system, create the 'flume_weblog_sink' directory using the command below.
sudo mkdir /Desktop/flume_weblog_sink
Now configure Flume by creating a configuration file named 'weblog.conf' for the ingestion, as shown below.
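A minimal sketch of what such a 'weblog.conf' might look like is given here, built around the two properties explained afterwards. The channel name 'channel1_1' and the remaining channel and sink settings are assumptions for illustration, not the book's exact configuration.

# Name the components of agent1 (the channel name 'channel1_1' is assumed)
agent1.sources = source1_1
agent1.sinks = hdfs-sink1_1
agent1.channels = channel1_1

# Spooling directory source reading from the local desktop directory
agent1.sources.source1_1.type = spooldir
agent1.sources.source1_1.spoolDir = /Desktop/flume_weblog_sink
agent1.sources.source1_1.channels = channel1_1

# HDFS sink writing into the HDFS directory created earlier
agent1.sinks.hdfs-sink1_1.type = hdfs
agent1.sinks.hdfs-sink1_1.hdfs.path = /flume_weblog_sink
agent1.sinks.hdfs-sink1_1.hdfs.fileType = DataStream
agent1.sinks.hdfs-sink1_1.channel = channel1_1

# In-memory channel buffering events between source and sink
agent1.channels.channel1_1.type = memory
agent1.channels.channel1_1.capacity = 1000
agent1.channels.channel1_1.transactionCapacity = 100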
agent1.sources.source1_1.spoolDir is set to the input path on the local file system, in our case '/Desktop/flume_weblog_sink'.
agent1.sinks.hdfs-sink1_1.hdfs.path is set to the output path in HDFS, in our case '/flume_weblog_sink'.
Copy the 'weblog.conf' file into Flume's 'conf' directory.
Start Hadoop processes.
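If the Hadoop daemons are not already running, they can typically be started with the standard scripts shown below; this assumes the Hadoop 'sbin' scripts are on the PATH, and the exact steps depend on the installation.

$ start-dfs.sh      # start NameNode, DataNode(s) and SecondaryNameNode
$ start-yarn.sh     # start ResourceManager and NodeManager(s)
$ jps               # verify that the Hadoop daemons are running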
After placing the 'weblog.conf' file inside Flume's 'conf' directory, start the Flume agent as shown below.
$ flume-ng agent -n agent1 -f /usr/local/apache-flume-1.7.0-bin/conf/weblog.conf
The Flume agent should now start completely without any error.
Now, to test ingestion by Flume, create a sample text file named 'weblog.txt' on the desktop and put some sample data in it, as follows.
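A few hypothetical lines in the Apache common log format, given purely as an illustration of what such sample data might contain:

192.168.1.10 - - [17/May/2019:10:05:03 +0530] "GET /electronics/mobiles HTTP/1.1" 200 5123
192.168.1.11 - - [17/May/2019:10:05:09 +0530] "GET /sports/shoes HTTP/1.1" 200 4218
192.168.1.12 - - [17/May/2019:10:05:15 +0530] "GET /sports/gym-equipment HTTP/1.1" 404 512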
We now have one file named 'weblog.txt' and one directory named 'flume_weblog_sink' on the desktop.
To have Flume ingest the file at runtime, place the newly created sample file 'weblog.txt' inside the 'flume_weblog_sink' directory on the desktop.
Flume will ingest the file almost instantly, and the Flume engine will rename it (to weblog.txt.COMPLETED) to mark that ingestion of the file is complete.
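As a rough sketch, assuming the paths used in this walkthrough, the file can be moved into the spool directory and the result verified as follows:

$ mv /Desktop/weblog.txt /Desktop/flume_weblog_sink/
$ ls /Desktop/flume_weblog_sink/     # the file should now be listed as weblog.txt.COMPLETED
$ hadoop fs -ls /flume_weblog_sink   # the ingested data should appear in HDFS (as FlumeData.* files by default)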