Monday, May 2, 2016

Tutorial: How to Create a Router using CentOS 7

Hello and welcome to my amazing tutorial on how to configure a CentOS 7 machine into a router. This will be done in 12 EZ STEPS! Good luck.

Step 1: Add this line to your sysctl.conf file in /etc
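The line itself was in an image that didn't survive, but for turning a box into a router it is almost certainly the IPv4 forwarding toggle:

```
net.ipv4.ip_forward = 1
```

After saving the file, run "sysctl -p" to apply it without rebooting.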



Step 2: Find your external NIC (the one connecting you to the internet) and edit the configuration file for it, which will be located at:

/etc/sysconfig/network-scripts/[ethernet device name]

Step 3: Configure it like the image below; however, do not change the UUID or DEVICE entries, as these are unique to your machine.





Step 4: Find your internal NIC (the one connecting you to the switch) and edit its configuration file, which will be located in the same directory.

Step 5: Configure it to match the image below, but do not change NAME, DEVICE, or UUID, since those are unique to your machine. The IPADDR is openly changeable to what you wish it to be, but I suggest leaving it as shown. The HWADDR line is also unique and will not be the one in the image below. To find yours, type "ip a" into the command line, look for the MAC address of the internal NIC (it will look similar to the one in the image), then copy that specific address into the file on the HWADDR line.



Step 6: Install dnsmasq with the following command:

"yum install dnsmasq"

Step 7: Type the following commands (in order), to get things started:

"systemctl start dnsmasq.service"
"systemctl enable dnsmasq.service" <---- To get it enabled on boot

Step 8: Open port 53 using this command:

"firewall-cmd --permanent --add-port=53/udp --zone=internal"

Step 9: Reload the firewall like so:

"firewall-cmd --reload"

Step 10: Quickly go onto one of your clients, find the NIC that is connected to the switch, and edit its configuration file at:

"/etc/sysconfig/network-scripts/[ethernet device name]"

Step 11: Add two new lines:

"DNS1=8.8.8.8"
"DNS2=8.8.4.4"

Step 12: Now you must configure DHCP. Do this:
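The screenshot with the DHCP settings didn't survive, but since dnsmasq is the DHCP server here, a minimal setup would look something like this (the interface name and the 192.168.1.x addresses are placeholders; match them to your internal NIC and the IPADDR from Step 5):

```
# /etc/dnsmasq.conf (example values)
interface=eno2                              # internal NIC
dhcp-range=192.168.1.50,192.168.1.150,12h   # address pool handed to clients
dhcp-option=option:router,192.168.1.1       # the router's internal IP
```

Restart dnsmasq afterward with "systemctl restart dnsmasq.service".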



Shoutout to: Server World, Marco Sirabella, and Jonathan from the famous blog:
jonlinuxdiz.blogspot.com

Thanks bros, and have a great day.

Copyright 1969-2069 Martin Is Awesome - No Rights Reserved

Wednesday, April 20, 2016

Week 23: Day 060 - Managing Logical Volumes


This blog post will be dedicated to logical volumes. The main reason for using them: if, for example, you run out of space on a logical volume, you take available disk space from the volume group, and if there's no space left in the volume group, you add a new physical volume. In essence, this is like running out of space on your hard drive, taking space from something else, and if there is none, buying and using a new hard drive.

In an LVM architecture, there are several layers, ranging from disks to partitions, logical units, and so on. Storage devices can be flagged as physical volumes, which makes them usable in LVM. Here's the hierarchy: physical disks, volume group, logical volumes, and file systems. This works with both Ext4 and XFS. Data can be moved between physical volumes inside a volume group using "pvmove", so if a hard disk is failing, its data can be moved away and the disk removed from the volume group.

To create logical volumes, you need to take care of the physical volumes (PV) and the volume group (VG), assigning physical volumes to it. Finally, the logical volume (LV) itself has to be created. The only command prefixes I need to remember are "pv", "vg", and "lv".
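As a sketch, the whole chain looks something like this (the device name /dev/sdb1 and the vgdata/lvdata names are just examples):

```
pvcreate /dev/sdb1                # flag the partition as a physical volume
vgcreate vgdata /dev/sdb1         # create a volume group containing it
lvcreate -n lvdata -L 5G vgdata   # carve out a 5 GiB logical volume
mkfs.xfs /dev/vgdata/lvdata       # put a file system on it
```

All of these need root, and pvcreate will destroy whatever is on the partition, so only point it at a spare disk.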

------------------------------------------------------------------------------------------------------------
One of the great benefits of LVM is that you can resize it. The "vgextend" command is used to add storage to a volume group, and the "vgreduce" command is used to take physical volumes out of a volume group (which can lead to some additional complications). For the RHCSA test, you need to know how to extend the available storage in volume groups. This procedure is relatively easy:

1. Make sure that a physical volume or device is available to be added to the volume group.

2. Use vgextend to extend the volume group. The new disk space will show immediately in the volume group.

Logical volumes can be extended with the "lvextend" command. To grow the logical volume size, use lvresize, followed by the -r option to also resize the file system on it.

Perfect examples of its use:
  • lvresize -L +1G -r /dev/vgdata/lvdata This adds 1 GiB to the logical volume and resizes the file system on it.
  • lvresize -r -l 75%VG /dev/vgdata/lvdata This resizes the logical volume so that it will take 75% of the total disk space in the volume group.
  • lvresize -r -l +75%VG /dev/vgdata/lvdata This tries to add 75% of the total size of the volume group to the logical volume. (Notice the difference with the previous command.)
  • lvresize -r -l +75%FREE /dev/vgdata/lvdata This adds 75% of all free disk space to the logical volume.
  • lvresize -r -l 75%FREE /dev/vgdata/lvdata This resizes the logical volume to a total size that equals 75% of the amount of free disk space. (Notice the difference with the previous command.)

Week 23: Day 059 - Managing Partitions


Hello, today I'm going to talk about managing MBR and GPT partitions, and touch on mounting file systems. MBR stands for "Master Boot Record", and GPT stands for "GUID Partition Table". This one might be a little long, so let's get right into it.

Basically, MBR is ancient, it was made in the 80s, so soon it will be deprecated. It was originally defined within the first 512 bytes of the hard drive. Let's talk about GPT now. This is exponentially better, and this is why. The upsides to using GPT not only include having up to 128 partitions, but also having up to 8 ZiB of space available on each one, so the 2 TiB limit no longer exists, and because the space available to store the partition table is much bigger than the 64 bytes used in MBR, there is no longer a need to distinguish between primary, extended, and logical partitions. GPT uses a 128-bit globally unique ID (GUID) to identify partitions, and a backup copy of the GUID partition table is created by default at the end of the disk.
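If you're not sure which scheme a disk is using, parted can tell you (run as root; /dev/sda is just an example device):

```
parted -s /dev/sda print | grep "Partition Table"
# prints "Partition Table: msdos" for MBR, or "Partition Table: gpt" for GPT
```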

------------------------------------------------------------------------------------------------------------

Get ready for some exercises. Here I will be responding to the exercise underneath the steps, as usual. Basically, this will cover the creation of MBR and GPT partitions. A few quick notes however, fdisk is the command associated with MBR, while gdisk is associated with GPT.

This exercise has been written to use an installation of RHEL/CentOS that contains nonpartitioned disk space. If you do not have such an installation, you can use a second disk device on your demo environment. This can be a virtual disk that is added through your virtualization program, or a USB flash drive if working on a physical installation. In that case, make sure the device names in this exercise are replaced with the device names that match your hardware.

------------------------------------------------------------------------------------------------------------

Quick Note: Swap is "fake RAM" (according to Marco): disk space that the kernel uses as spillover when physical RAM runs low.

------------------------------------------------------------------------------------------------------------

I won't even get into mounting at this point, I've learned a lot about it ever since that var issue that Marco and I had. Thanks for reading, and goodbye.

Sunday, April 3, 2016

Week 22: Day 058 - Configuring Logging


Hello, it's been quite a while since my last blog post. We had a one week spring break, which was much needed in my case. This week, we're gonna talk about logging and partitions. You ready? Let's get riiiiiight into the blog post.

First, here are a few examples of ways log information can be written: direct write, rsyslogd, and journald. journald provides an advanced log management system; it collects and stores information regarding booting, the kernel, and services. To query/check it, you type "journalctl". When checking the logs in /var/log/messages, you'll find that each line displays the date and timestamp (in military time) and the host the message came from. More importantly, it shows you the service/process name and the contents of the message. To watch a log live, run "tail -f <logfile>" and it will show you new lines as they arrive (Ctrl+C will close this, obviously). Here's an exercise to show you:

1. Open a root shell.

Obviously.

2. From the root shell, type tail -f /var/log/messages.

This shows you the last ten lines of the file "messages", which contain just that.

3. Open a second terminal window. In this terminal window, type su - user to open a subshell as user.

This opened another shell under my username.

4. Type su - to open a root shell, but enter the wrong password.

I usually use sudo su. When entering the wrong password, nothing shows up in /var/log/messages.

5. Notice that nothing appears in /var/log/messages. That is because login-related errors are not written here.

6. From the user shell, type logger hello. You’ll see the message appearing in the /var/log/messages file in real time.

Interesting shortcut.

7. In the tail -f terminal, use Ctrl+C to stop tracing the messages file.

8. Type tail -n 20 /var/log/secure. This shows the last 20 lines in /var/log/secure, which also shows the messages that the su - password errors have generated previously.

This is where it's stored instead of the messages file, it's in the "secure" file.

------------------------------------------------------------------------------------------------------------

Let's move on to rsyslogd. This is similar to journald. It uses facilities, priorities and destinations:
  • A facility specifies a category of information that is logged. Rsyslogd uses a fixed list of facilities, which cannot be extended. This is because of backward compatibility with the legacy syslog service.
  •  A priority is used to define the severity of the message that needs to be logged. When specifying a priority, by default all messages with that priority and all higher priorities are logged.
  • A destination defines where the message should be written to. Typical destinations are files, but rsyslog modules can be used as a destination as well, to allow further processing through an rsyslogd module.
When specifying a destination, it's often a file; if the filename is prefixed with a hyphen (Ex: -/var/log/maillog), writes are buffered instead of being synced to disk after every message. Devices can also be used. The severity levels, from low to high, are: debug, info, notice, warning/warn, err/error, crit, alert, and emerg/panic. The facilities are different categories of info like cron, daemon, or mail.
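Put together, a rule in /etc/rsyslog.conf is just a facility.priority selector followed by a destination; for example (the file names are illustrative):

```
mail.*       -/var/log/maillog       # all mail messages, buffered writes
cron.err     /var/log/cron-errors    # cron messages of priority err and higher
*.emerg      :omusrmsg:*             # emergencies go to all logged-in users
```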

In this exercise, you learn how to change rsyslog.conf. You configure the Apache service to log messages through syslog, and you create a rule that logs debug messages to a specific file.

1. By default, the Apache service does not log through rsyslog, but keeps its own logging. You are going to change that. To start, type yum install -y httpd to install the Apache service.

Apache is widely used for Linux web-servers.

2. After installing the Apache service, open its configuration file /etc/httpd/conf/httpd.conf and add the following line to it:

ErrorLog syslog:local1

3. Type systemctl restart httpd.

4. Now create a line in the rsyslog.conf file that will send all messages that it receives for facility local1 (which is now used by the httpd service) to the file /var/log/httpd-error.log. To do this, include the following line:
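The line itself was in an image that didn't survive; based on the description, it would be something like:

```
local1.*    /var/log/httpd-error.log
```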

5. Tell rsyslogd to reload its configuration, by using systemctl restart rsyslog.

6. All Apache error messages will now be written to the httpd-error.log file.

7. From the Firefox browser, go to http://localhost/nowhere. Because the page you are trying to access does not exist, this will be logged to the Apache error log.

8. Now let's create a snap-in file that logs debug messages to a specific file as well. To do this, type echo "*.debug /var/log/messages-debug" > /etc/rsyslog.d/debug.conf.

9. Again, restart rsyslogd using systemctl restart rsyslog.

10. Use the command tail -f /var/log/messages-debug to open a trace on the newly created file.

11. Type logger -p daemon.debug “Daemon Debug Message”. You’ll see the debug message passing by.

12. Use Ctrl+C to close the debug log file.

Moving on to rotating log files. To prevent syslog messages from filling up your system completely, log messages can be rotated. So, if /var/log/messages is rotated on January 17, 2015, the rotated filename will be /var/log/messages-20150117. The default settings for log rotation are kept in the file /etc/logrotate.conf, where you can customize the log rotation settings.
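For reference, the defaults in /etc/logrotate.conf on CentOS 7 look roughly like this:

```
weekly      # rotate log files weekly
rotate 4    # keep 4 weeks worth of backlogs
create      # create a new (empty) log file after rotating the old one
dateext     # use a date such as -20150117 as the suffix of rotated files
```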

--------------------------------------------------------------------------------------------------------------------------

Finally, let's talk some more about journald. This is an alternative to rsyslog. If you want to see the last messages that have been logged, you can use journalctl -f, which shows the last lines of the messages and automatically appends new log lines. You can also type journalctl and use (uppercase) G to go to the end of the journal.

In this exercise, you learn how to work with different journalctl options.

1. Type journalctl. You’ll see the content of the journal since your server last started, starting at the beginning of the journal. The content is shown in less, so you can use common less commands to walk through the file.

2. Type q to quit the pager. Now type journalctl --no-pager. This shows the contents of the journal without using a pager.

3. Type journalctl -f. This opens the live view mode of journalctl, which allows you to see new messages scrolling by in real time. Use Ctrl+C to interrupt.

4. Type journalctl and press the Tab key twice. This shows specific options that can be used for filtering. Type, for instance, journalctl _UID=0.

5. Type journalctl -n 20. The -n 20 option displays the last 20 lines of the journal (just like tail -n 20).

6. Now type journalctl -p err. This command shows errors only.

7. If you want to view journal messages that have been written in a specific time period, you can use the --since and --until commands. Both options take the time parameter in the format YYYY-MM-DD hh:mm:ss. Also, you can use yesterday, today, and tomorrow as parameters. So, type journalctl --since yesterday to show all messages that have been written since yesterday.

8. journalctl allows you to combine different options, as well. So, if you want to show all messages with a priority err that have been written since yesterday, use journalctl --since yesterday -p err.

9. If you need as much detail as possible, use journalctl -o verbose. This shows different options that are used when writing to the journal (see Listing 13.3). All these options can be used to tell the journalctl command which specific information you are looking for. Type, for instance, journalctl _SYSTEMD_UNIT=sshd.service to show more information about the sshd systemd unit.


The command "journalctl -o verbose" is important since it shows all the important information. By default, the journal is stored in /run/log/journal. The entire /run directory is used for current process status information only, which means that the journal is cleared when the system reboots. Even when the journal is written to the permanent location in /var/log/journal, that does not mean that the journal is kept forever. Here's how to make the journal permanent:

1. Open a root shell and type mkdir /var/log/journal.

2. Before journald can write the journal to this directory, you have to set ownership. Type chown root:systemd-journal /var/log/journal, followed by chmod 2755 /var/log/journal.

3. Next, you can either reboot your system (restarting the systemd-journald service is not enough) or use the killall -USR1 systemd-journald command.

4. The systemd journal is now persistent across reboots. If you want to see the log messages since last reboot, use journalctl -b.

--------------------------------------------------------------------------------------------------------------------------

Review Questions

1. Which file is used to configure rsyslogd?

/etc/rsyslog.conf

2. Which configuration file contains messages related to authentication?

/var/log/secure

3. If you do not configure anything, how long will it take for log files to be rotated away?

Around 1 month, 5 weeks to be exact

4. Which command enables you to log a message from the command line to the user facility, using the notice priority?

logger -p user.notice "(enter text here)"

5. Which line would you add to write all messages with a priority of info to the file /var/log/messages.info?

"*.=info /var/log/messages.info" in /etc/rsyslog.conf

6. Which configuration file enables you to allow the journal to grow beyond its default size restrictions?

/etc/systemd/journald.conf

7. Which command enables you to see new messages in the journal scrolling by in real time?

journalctl -f

8. Which command enables you to see all journald messages that have been written for PID 1 between 9:00 a.m. and 3:00 p.m.?

journalctl _PID=1 --since 9:00:00 --until 15:00:00

9. Which command enables you to see journald messages since last reboot on a system where a persistent journal has been configured?

journalctl -b

10. Which procedure enables you to make the journald journal persistent?

mkdir /var/log/journal, set its ownership and permissions (chown root:systemd-journal, chmod 2755), then run killall -USR1 systemd-journald (or reboot)

Wednesday, March 16, 2016

Week 22: Day 057 - Scheduling Tasks


Today we will be learning about scheduling tasks, mainly with tools such as cron. I've been aware of that tool for a while, and it's very useful for timing actions on your machine, many times something along the lines of shutting down your computer at a scheduled time.

What is cron? Cron is a service which runs processes at specific times. Some system tasks are already using cron, probably without your awareness, such as "logrotate" which automatically gets rid of old logs at a certain time. To see what's going on with cron right now at this second, type "systemctl status crond -l". The most important part is the first part, because it shows that it's loaded. These are some examples of how you schedule cron properly:
  • * 11 * * * Any minute between 11:00 and 11:59 (probably not what you want)
  • 0 11 * * 1-5 Every weekday at 11 a.m.
  • 0 7-18 * * 1-5 On the hour, every hour from 7:00 through 18:00, on weekdays only
  • 0 */2 2 12 5 Every 2 hours on the hour on December 2 and every Friday in December
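To make the five time fields concrete, here's an annotated line in the /etc/crontab format (the logger message is just an example job):

```
# minute  hour  day-of-month  month  day-of-week  user  command
  0       2     *             *      1-5          root  logger "2 a.m. weekday job"
```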
Instead of editing /etc/crontab, you can create a separate configuration file and put it into /etc/cron.d; then put the scripts for those cron jobs into /etc/cron.hourly, cron.daily, or cron.monthly, you get the point! User-specific crontabs are created with crontab -e -u (username), which is extremely useful if you don't want the job to run for all users. Then there's anacron, a helper service that makes sure the jobs in those script directories still run even if the machine was off at the scheduled time; its configuration lives in /etc/anacrontab (it's a file, not a directory). To restrict who can use cron, create the file /etc/cron.allow and add users to it to whitelist them, or create /etc/cron.deny, which denies cron privileges to whomever is listed in it. Furthermore, here's how! In this exercise, you apply some of the cron basics. You schedule cron jobs using different mechanisms:

1. Open a root shell. Type cat /etc/crontab to get an impression of the contents of the /etc/crontab configuration file.

From what I see using cat, this file has got a neat layout showing you how to determine times and dates.

2. Type crontab -e. This opens an editor interface that by default uses vi as its editor. Add the following line:
0 2 * * 1-5 logger message from root

By default this uses "vi", and it opened up crontab automatically.

3. Use the vi command :wq! to close the editing session and write changes.

Same save command to exit as vim.

4. Use cd /etc/cron.hourly. In this directory, create a script file with the name eachhour that contains the following line:
logger This message is written at $(date)

The file does not need an extension as long as it's executable, but as a norm use .sh

5. Use chmod +x eachhour to make the script executable; if you fail to make it executable, it will not work.

This makes the script executable, which is very important to note, since otherwise cron cannot run it.

6. Now enter the directory /etc/cron.d and in this directory create a file with the name eachhour. Put the following contents in the file:
11 * * * * root logger This message is written from /etc/cron.d

7. Save the modifications to the configuration file and go work on the next section. (For optimal effect, perform the last part of this exercise after a couple of hours.)

Hopefully it will work haha.

8. After a couple of hours, type grep written /var/log/messages and read the messages that have been written which verifies correct cron operations.

Can't really verify this, but it's probably true.
------------------------------------------------------------------------------------------------------------

Finally, let's talk about scheduling jobs with atd. This is another method of timing processes on your system. In this exercise, you learn how to schedule jobs using the atd service:

1. Type systemctl status atd. In the line that starts with Loaded:, this command should show you that the service is currently loaded and enabled, which means that it is ready to start receiving jobs.

It's enabled and displays that the job spooling tools are active and working properly.

2. Type at 15:00 (or replace with any time near to the time at which you are working on this exercise).

It changed the $ to at>, similar to how python changes the prompt. Commands typed at this prompt will be run at the scheduled time.

3. Type logger message from at. Use Ctrl+D to close the at shell.

This created a new job.

4. Type atq to verify that the job has indeed been scheduled.

This indeed scheduled a job. I believe that this tool atd is a really useful tool to shortcut editing and creating script files for cron, and simply typing in commands to atd.
------------------------------------------------------------------------------------------------------------

Review Questions

1. Where do you configure a cron job that needs to be executed once every 2 weeks?

Cron has no built-in "every 2 weeks" interval; the closest is a crontab entry (in /etc/crontab or /etc/cron.d) that runs on specific days of the month, such as the 1st and the 15th.

2. How do you specify the execution time in a cron job that needs to be executed twice every month, on the 1st and the 15th of the month at 2 p.m.?

0 14 1,15 * *

3. How do you specify cron execution time for a job that needs to run every 2 minutes on every day?

*/2 * * * *

4. How do you specify a job that needs to be executed on September 19 and every Thursday in September?

0 0 19 9 4 (when both day-of-month and day-of-week are set, cron treats them as an OR, so this runs on September 19 and on every Thursday in September)

5. Which three valid day indicators can you use to specify that a cron job needs to be executed on Sunday?

0, 7, or sun

6. Which command enables you to schedule a cron job for user lisa?

crontab -e -u lisa

7. How do you specify that user boris is never allowed to schedule jobs through cron?

echo boris >> /etc/cron.deny

8. You need to make sure that a job is executed every day, even if the server at execution time is temporarily unavailable. How do you do this?

Schedule it through anacron (for example, by placing a script in /etc/cron.daily); anacron makes sure the job still runs once the server is back up.

9. Which service must be running to schedule at jobs?

atd

10. Which command enables you to find out whether any current at jobs are scheduled for execution?

atq

Wednesday, March 9, 2016

Week 21: Day 055 - Managing Software


Hello, this week I'm focusing on yum and rpm. In other words, this is about downloads and repositories. Although I already have a good understanding of its basic uses, I will delve into more specific things.

First of all, RPM ("Red Hat Package Manager") is a way to archive packages and provide their metadata. This program, which comes with Red Hat, is immensely important when dealing with repos. Repositories should be kept up to date, as that's important for installations. In the past I have made several repos, in my successful attempts to install Google Chrome and Spotify. To tell the server which repo to use, put your repository files in /etc/yum.repos.d and give them the extension ".repo".

In this exercise, you learn how to create your own repository. To perform this exercise, you need to have access to the CentOS installation disk or ISO file.

1. Insert the installation disk in your virtual machine. This mounts it on the directory /run/media/user/CentOS 7 x86_64. Alternatively, you can manually mount the ISO on the /mnt directory, using mount -o loop /path/to/centos.iso /mnt.

I don't really need to do this part.

2. Type mkdir /repo to create a directory /repo that can be used as repository.

3. If you want to create a complete repository, containing all the required files, type cp $MOUNTPATH/Packages/* /repo. (Replace $MOUNTPATH with the name of the directory on which the installation disk is mounted.) If you do not need a complete repository, you can copy just a few files from the installation disk to the /repo directory.

4. Type yum install -y createrepo to ensure that the createrepo RPM package is installed.

Note: this step is required; a ".repo" file (which you create later) only points yum at the repository, while createrepo generates the metadata it serves.

5. Type createrepo /repo. This generates the repository metadata, which allows you to use your own repository.

6. Now that you have created your own repository, you might as well start using it. In the /etc/yum.repos.d directory, create a file with the name my.repo. Make sure this file has the following contents:
[myrepo]
name=myrepo
baseurl=file:///repo

Then type it into this file. That's all you need, then you're done!

7. Type yum repolist to verify the availability of the newly created repository. It should show the name of the myrepo repository, including the number of packages that is offered through this repository.

Done.
------------------------------------------------------------------------------------------------------------

Second of all, let's talk about yum! Even though it may be replaced some day by dnf, right now it's important for us to use yum, since it will be on the test. yum works with repositories, which is why RPM is so important, and why they go hand in hand.

Here are all the important yum commands:

- yum install (package name)
- yum search (keyword)
- yum update (package name)
- yum history
- yum list
- yum provides (filename)

That's pretty much it haha. Thanks for reading.


Review Questions

1. You have a directory containing a collection of RPM packages and want to make that directory a repository. Which command enables you to do that?

createrepo

2. What needs to be in the repository file to point to a repository on http://server.example.com/repo?

[xxxx]
name=xxxxx
baseurl=http://server.example.com/repo

3. You have just configured a new repository to be used on your RHEL computer. Which command enables you to verify that the repository is indeed available?

yum repolist

4. Which command enables you to search the RPM package containing the file useradd?

5. Which two commands do you need to use to show the name of the yum group that contains security tools and shows what is in that group?

6. Which command enables you to install an RPM that you have downloaded from the Internet and which is not in the repositories?

7. You want to make sure that an RPM package that you have downloaded does not contain any dangerous script code. Which command enables you to do so?

8. Which command reveals all documentation in an RPM?

9. Which command shows the RPM a file comes from?

10. Which command enables you to query software from the repository?

Monday, March 7, 2016

Week 21: Day 054 - Process Management


Everything that happens on a Linux server requires the creation of processes. This chapter will cover specifics on what these processes do. A process can run multiple threads; a thread is an independent stream of execution within a process, and a process's threads can run at the same time.

To immediately start a job in the background, append the "&" symbol to the end of the command. To return it to the foreground, use the "fg" command. To terminate a process, use the "kill" command.
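A quick demo of those mechanics in a shell (the sleep is just a stand-in for any long-running job):

```shell
sleep 30 &     # the trailing "&" starts the job in the background
pid=$!         # the shell keeps the PID of the last background job in $!
jobs           # lists this shell's jobs; "sleep 30" shows up as Running
kill "$pid"    # terminate it
```

After the kill, running jobs again would show the job as Terminated, and then it disappears from the list.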


In this exercise, you apply the commands that you just learned about to manage jobs that have been started from the current shell.

1. Open a root shell and type the following commands:

sleep 3600 &
dd if=/dev/zero of=/dev/null &
sleep 7200

2. Because you started the last command without & after it, you have to wait 2 hours before you get control of the shell back. Press Ctrl+Z to stop it.

I can no longer access the shell, ctrl-z will stop it.

3. Type jobs. You will see the three jobs that you just started. The first two of them have the Running state, and the last job currently is in the Stopped state.

There are two running jobs, and the one that I stopped.

4. Type bg 3 to continue running job 3 in the background. Notice that because it was started as the last job, you did not really have to add the number 3.

It shows three running jobs.

5. Type fg 1 to move job 1 to the foreground.

This moves "sleep 3600" to the foreground.

6. Type Ctrl+C to cancel job number 1 and use jobs to confirm that it is now gone.

I ended that job, and it no longer exists.

7. Use the same approach to cancel jobs 2 and 3 also.

They're all dead.

8. Open a second terminal on your server.

9. From that second terminal, type dd if=/dev/zero of=/dev/null &.

Don't know what this did.

10. Type exit to close the second terminal.

11. From the other terminal, start top. You will see that the dd job is still running. From top, use k to kill the dd job.

It asked me to type the number of the running job, and I chose the one that said "dd", now it's dead.

You cannot manage a single thread, but you can manage processes. When managing processes, it's easy to identify kernel processes because their names are in "[ ]" brackets. Use the "ps aux | head" command to take a look at some examples of kernel processes. Now, "ps" retrieves information about running processes, and there are several modifiers for it. "aux" will show you a summary of all processes. To look for the exact command used to start a given process, type "ps -ef". To see the hierarchical relationship between parent and child processes, type "ps fax". Note: for these BSD-style options, hyphens are optional.
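A few of these side by side (output trimmed with head; the exact process list will differ on your machine):

```shell
ps aux | head -n 3   # BSD-style summary; the first line is the column header
ps -ef | head -n 3   # full-format listing, including the command that started each process
ps fax | head -n 5   # "forest" view: child processes are indented under their parents
```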


In this exercise, you learn how to work with ps, nice, kill, and related utilities to manage processes.

1. Open a root shell. From this shell, type dd if=/dev/zero of=/dev/null &. Repeat this command three times.

Created 4 different jobs of this.

2. Type ps aux | grep dd. This shows all lines of output that have the letters dd in them; you will see more than just the dd processes, but that should not really matter. The processes you just started are listed last.

This searches for dd in the output of ps aux.

3. Use the PID of one of the dd processes to adjust the niceness, using renice -n 5 <PID>. Notice that in top you cannot easily get an overview of processes and their current priority.

I got an error.

4. Type ps fax | grep -B5 dd. The -B5 option shows the matching lines, including the five lines before that. Because ps fax shows hierarchical relationships between processes, you should also find the shell and its PID from which all the dd processes were started.

5. Find the PID of the shell from which the dd processes were started and type kill -9 <PID>, replacing <PID> with the PID of the shell you just found. You will see that your root shell is closed, and with it, all of the dd processes. Killing a parent process is an easy and convenient way to kill all of its child processes also.


Review Questions

1. Which command gives an overview of all current shell jobs?

jobs

2. How do you stop the current shell job to continue running it in the background?

Ctrl-Z then bg

3. Which keystroke combination can you use to cancel the current shell job?

Ctrl-C

4. A user is asking you to cancel one of the jobs he has started. You cannot access the shell that user currently is working from. What can you do to cancel his job anyway?

ps aux and kill <PID>

5. Which command would you use to show parent-child relationships between processes?
ps fax

6. Which command enables you to change the priority of PID 1234 to a higher priority?

renice -n -5 1234 (as root; only root may raise a process's priority)

7. On your system, 20 dd processes are currently running. What is the easiest way to stop all of them?

killall dd

8. Which command enables you to stop the command with the name mycommand?

pkill mycommand

9. Which command do you use from top to kill a process?

k

10. How would you start a command with a reasonably high priority without risking that no more resources are available for other processes?

nice -n -5 mycommand (a moderately negative niceness raises priority without starving other processes the way -20 would; needs root)

Friday, March 4, 2016

Week 20: Day 053 - Configuring Networking


Hello, today I'm covering network configuration. Since a lot of this stuff overlaps with what I have learned so far with Network+, I will skip some stuff about IP addresses, and go straight to the important stuff.

Quick recap though: IPv4 addresses are what is widely used now, but since there is a shortage of these IPs, many are switching to IPv6 addressing to cope with this important issue. The difference between them: IPv4 addresses are 32-bit while IPv6 addresses are 128-bit. DHCP is "Dynamic Host Configuration Protocol", and it distributes IPs in your network on its own, meaning no necessity for static addressing. Network cards in Linux used to get names like "eth0" or "eth1", ordered by detection order. Under the newer predictable naming scheme, Ethernet interfaces begin with "en", WLAN interfaces with "wl", and WWAN interfaces with "ww". The next part of the name represents the adapter location: "o" is onboard, "s" is a hotplug slot, "p" is a PCI location, and "x" means the name is based on the MAC address. Finally, the name ends with a number representing an index, ID, or port.
------------------------------------------------------------------------------------------------------------
Example: eno16777734
------------------------------------------------------------------------------------------------------------

To validate your network configuration, there are several commands that can be used with the ip utility:

- "ip addr"
- "ip route"
- "ip link"

In more detail: to see the current network configuration, type "ip addr show" or "ip a". This will show you the current state, the MAC address configuration, and the IPv4 or IPv6 configuration. To see the link state, type "ip link show". To validate your routing, type "ip route show"; you can probably see the trend here: whatever you want to see, add the modifier "show".
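A minimal illustration on the loopback interface, safe to run as a regular user (assuming the iproute2 tools are installed; the names and addresses on your machine will differ):

```shell
# Addresses on the loopback interface only
ip addr show lo

# Link state of every interface, one line each
ip -o link show

# Current routing table
ip route show
```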

On another note, the "netstat" command is deprecated and has already been dropped on some Linux distros. Its replacement is "ss", as in "ss -lt".

------------------------------------------------------------------------------------------------------------
Exercise: 

1. Open a root shell to your server and type ip addr show. This shows the current network configuration. Note the IPv4 address that is used. Notice the network device names that are used; you need these later in this exercise.

Very messy, but yes.

2. Type ip route show to verify routing configuration.

This shows all the information regarding routes created through your system in relation to the network.

3. If your computer is connected to the Internet, you can now use the ping command to verify the connection to the Internet is working properly. Type ping -c 4 8.8.8.8, for instance, to send four packets to IP address 8.8.8.8. If your Internet connection is up and running, you should get “echo reply” answers.

This will ping the specified address four times. Fun fact: 8.8.8.8 is one of Google's public DNS servers.

4. Type ip addr add 10.0.0.10/24 dev <yourdevicename>.

This basically adds a second IP address to the interface; the original one stays, as the next step shows.

5. Type ip addr show. You’ll see the newly set IP address, in addition to the IP address that was already in use.

Static addressing...

6. Type ifconfig. Notice that you do not see the newly set IP address (and there are no options with the ifconfig command that allow you to see it). This is one example why you should not use the ifconfig command anymore.

ifconfig is on its way out in favor of the ip command. Sounds ridiculous, but it's true!

7. Type ss -tul. You’ll now see a list of all UDP and TCP ports that are listening on your server.

Very useful command. This will indeed show all TCP and UDP ports listening on your system.
------------------------------------------------------------------------------------------------------------

Next up were nmtui and nmcli. Since I've already worked with nmtui, I skimmed over some of the stuff I already knew, but I'm still gonna blog about it. As for nmcli, I have not done much with it yet, but nmtui is enough for me since people like Graham have already posted stuff about nmcli.

Anyways, nmtui is an interesting tool which I've used with my virtual machines, for example when setting them up for routing and DHCP. Here are the important notes:

The nmtui interface consists of three menu options:

- Edit a Connection: Use this option to create new connections or edit existing connections.

- Activate a Connection: Use this to (re)activate a connection.

- Set System Hostname: Use this to set the hostname of your computer.

The option to edit a connection offers almost all features that you might ever need while working on network connections. It allows you to do anything you need to do on the RHCSA exam. You can use it to add any type of connection; not just Ethernet connections, but also advanced connection types such as network bridges and teamed network drivers are supported.

When you select the option Edit Connection, you get access to a rich interface that allows you to edit most properties of network connections. After editing the connection, you need to deactivate it and activate it again; this should happen automatically, but the fact is it does not. This wraps up my post! Thanks for reading.
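Under the hood, nmtui just writes ifcfg files under /etc/sysconfig/network-scripts/. A sketch of what a static-IP connection might look like (the device name, addresses, and values below are examples only; yours will differ, and the UUID line is unique per machine):

```
TYPE=Ethernet
BOOTPROTO=none          # static addressing; use "dhcp" for dynamic
NAME=eno16777734        # example name; match your own device
DEVICE=eno16777734
ONBOOT=yes
IPADDR=192.168.1.10     # example address
PREFIX=24
GATEWAY=192.168.1.1
DNS1=8.8.8.8
```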

Quick Note: "hostnamectl" is extremely useful; it shows you important data about your machine.

Tuesday, March 1, 2016

Week 20: Day 052 - Managing Users and Groups


Hello, today I will give you a summary of what I learned about managing users and groups. Most of this was done through my own doing on terminal, with some help from the textbook. Let's see what I know:

#1 - Local User Accounts

To create a local user account you need to sudo or get into root by doing "sudo su".

To make a new user type "useradd (name of user)".

To identify which user you are type "whoami".

To edit the sudoers file type "visudo".

To safely edit the user configuration file (/etc/passwd) do "vipw".

To change the password do "passwd (name of user)", usually in sudo.

To remove users do "userdel -r (name of user)".
------------------------------------------------------------------------------------------------------------
Ideal Example: "passwd -n 30 -w 3 -x 90 linda" sets the password for user linda to a minimal usage period of 30 days and an expiry after 90 days, where a warning is generated 3 days before expiry.
------------------------------------------------------------------------------------------------------------
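Accounts created with useradd end up as one line each in /etc/passwd; a quick way to inspect the fields (shown here for root, which exists on every system):

```shell
# Each line: name:password:UID:GID:comment:home:shell
grep '^root:' /etc/passwd

# Field 3 is the UID; for root this is always 0
grep '^root:' /etc/passwd | cut -d: -f3    # prints 0
```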

#2 - Local Groups

To create a local group, use the command "groupadd" (or edit /etc/group safely with "vigr").

To set the group ID (GID) yourself when creating a group, use "groupadd -g (GID) (name of group)".

To make a user a part of an administrative group do "usermod -aG wheel user".

To make sure that a user is in a certain group type "id (name of user)".

To modify properties of the group "groupmod".
------------------------------------------------------------------------------------------------------------
Ideal Example: Type "groupadd sales" followed by "groupadd account" to add groups with the names sales and account. Then "usermod -aG sales linda" to add Linda to that group.
------------------------------------------------------------------------------------------------------------
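Creating the groups and adding linda from the example above needs root, but you can always verify membership as a regular user with id and /etc/group (checked here against root, which every system has):

```shell
# List every group the user belongs to
id -Gn root

# The group's entry in /etc/group; members are listed after the last colon
grep '^root:' /etc/group
```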

#3 - Lightweight Directory Access Protocol (LDAP)

This is hierarchical and organized like DNS.

To configure this to CentOS/RHEL 7, there are several options:

- "authconfig" will let you configure through command line.
- "authconfig-tui" will let you configure it with a Text User Interface.
- "authconfig-gtk" will give you a GUI utility to configure it.

To connect to an LDAP server you must:

- Set up hostname resolution on your server.
- In this setup, the IP "192.168.122.200" is used for the LDAP server.

You will be given a form in the Text User Interface version; type your LDAP server and search base details in there. Click "OK" and you're done! Finally:

Review Questions

1. What is the UID of user root?

0

2. What is the configuration file in which sudo is defined?

/etc/sudoers

3. Which command should you use to modify a sudo configuration?

visudo

4. Which two files can be used to define settings that will be used when creating users?
/etc/login.defs and /etc/default/useradd

5. How many groups can you create in /etc/passwd?

None

6. If you want to grant a user access to all admin commands through sudo, which group should you make that user a member of?

wheel

7. Which command should you use to modify the /etc/group file manually?

vigr

8. Which two commands can you use to change user password information?

passwd and chage

9. What is the name of the file where user passwords are stored?

/etc/shadow

10. What is the name of the file where group accounts are stored?

/etc/group

Monday, February 29, 2016

Week 20: Day 051 - Working With Text Files


Hello, today I will be continuing my blog posts, after a long snowstorm which kept us out of school for two weeks. In this chapter, I will take the exercises put it on here, and then comment on what I do. In the end, I will answer the review questions.


In this exercise, you apply some basic less skills working with file contents and command output.

1. From a terminal, type less /etc/passwd. This opens the /etc/passwd file in the less pager.


When I typed this command, it showed up with the accounts of everyone (since I'm in rackspace).

2. Type G to go to the last line in the file.


This goes to the last line, but I already happened to be there.

3. Type /root to look for the text root. You'll see that all occurrences of the text root are highlighted.


This will search for all words that say "root", and will highlight them.

4. Type q to quit less.

Just like vim.

5. Type ps aux | less. This sends the output of the ps aux command (which shows a listing of all processes) to less. Browse through the list.


This is like task manager, except not real time.

6. Press q to quit less.


In this exercise, you learn how to use head and tail to get exactly what you want.

1. Type tail -f /var/log/messages. You’ll see the last lines of /var/log/messages being displayed. The file doesn’t close automatically.


It doesn't close on its own; you have to interrupt it yourself.

2. Type Ctrl+C to quit the previous command.

This interrupts the tail -f command; unlike a pager, it doesn't respond to q.

3. Type head -n 5 /etc/passwd to show the first five lines in /etc/passwd.

This shows the first five lines of the file "passwd".

4. Type tail -n 2 /etc/passwd to show the last two lines of /etc/passwd.

This shows the last two lines, rather than the first.

5. Type head -n 5 /etc/passwd | tail -n 1 to show only line number 5 of the /etc/passwd file.

This focuses on one line, line 5, rather than showing the first five lines.
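A self-contained version of that trick, using a throwaway file; sed -n 5p (from the review questions below) does the same job in one command:

```shell
f=$(mktemp)
printf 'line%s\n' 1 2 3 4 5 6 7 > "$f"

# First five lines, then keep only the last of those: line 5
head -n 5 "$f" | tail -n 1    # prints line5

# Same result with sed
sed -n 5p "$f"                # prints line5

rm -f "$f"
```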


In this exercise, you work through some common grep options.

1. Type grep '^#' /etc/sysconfig/sshd. This shows that the file /etc/sysconfig/sshd contains a number of lines that start with the comment sign #.



2. To view the configuration lines that really matter, type grep -v '^#' /etc/sysconfig/sshd. This shows only lines that do not start with a #.




3. Now type grep -v '^#' /etc/sysconfig/sshd -B 5. This shows lines that are not starting with a # sign but also the five lines that are directly before that line, which is useful because in these lines you'll typically find comments on how to use the specific parameters. However, you'll also see that many blank lines are displayed.



4. Type grep -v -e '^#' -e '^$' /etc/sysconfig/sshd. This excludes all blank lines and lines that start with #.
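The same filter can be tried on a throwaway file so you can see exactly what survives (the sample lines below are made up for illustration):

```shell
f=$(mktemp)
printf '# a comment\n\nPermitRootLogin no\n# another comment\nUseDNS no\n' > "$f"

# Drop lines starting with # and blank lines; only real settings remain
grep -v -e '^#' -e '^$' "$f"
# prints:
#   PermitRootLogin no
#   UseDNS no

rm -f "$f"
```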



Review Questions


1. Which command enables you to see the results of the ps aux command in a way that you can easily browse up and down in the results?

ps aux | less

2. Which command enables you to show the last five lines from ~/samplefile?

tail -n 5 ~/samplefile

3. Which command do you use if you want to know how many words are in ~/samplefile?

wc ~/samplefile

4. After opening command output using tail -f ~/mylogfile, how do you stop showing output?

Ctrl-C

5. Which grep option do you use to exclude all lines that are starting with either a # or a ;?

grep -v -e '^#' -e '^;' filename


6. Which regular expression do you use to match one or more of the preceding characters?

+

7. Which grep command enables you to see text as well as TEXT in a file?

grep -i text file

8. Which grep command enables you to show all lines starting with PATH, as well as the five lines just before that line?

grep -B5 '^PATH' filename

9. Which sed command do you use to show line 9 from ~/samplefile?

sed -n 9p ~/samplefile

10. Which command enables you to replace the word user with the word users in ~/samplefile?

sed -i 's/user/users/g' ~/samplefile

Monday, February 8, 2016

Week 19: Day 051 - TAR + GZip Compression

Hello folks, and welcome to my short blog on TAR. Basically, what I was asked to do, was to create a blog entry which built on something from the chapters we have been reading. I was given the task of going in-depth on TAR.

To begin, "Tape Archiver" (TAR) is used to archive files. There are three tasks important to the RHCSA exam, when it comes to knowing how to archive. You should know how to:

- Create an archive
- List the contents of an archive
- Extract an archive

To create an archive using TAR, you want to use the command:

tar cf archivename.tar /files-you-want-to-archive   (add the v modifier, as in "cvf", if you want to see what's happening)

Example: (must be root) tar cvf /root/homes.tar /home

To add a file to an archive you would use the r modifier.

Example: tar rvf /root/homes.tar /etc/hosts

To update it, use the u modifier.

To extract the archive, use the x modifier.

To see the contents of an archive type use the t modifier.

Now, you're probably all wondering, "when do we get to the compression?" Well, interestingly enough, back then tar wasn't really used for compression. There was an add-on program called gzip, and it got so popular that support for it is now built into tar by default. Nowadays, archives are almost always compressed as well. Here's how to do it:

When you're creating your archive, add the modifier -z and it will compress it when archiving your files. However, if you have already archived your files and you want to compress it, a command like this would work:

gzip (name of file).tar
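Putting the whole round trip together in a scratch directory (safe to run anywhere; everything happens under a temp dir, and the file names are just examples):

```shell
d=$(mktemp -d)
cd "$d"
mkdir home
echo "hello" > home/file1

# c=create, z=gzip-compress, f=file name (add v for verbose)
tar czf homes.tar.gz home

# t=list the contents of the archive
tar tzf homes.tar.gz

# x=extract; -C picks the target directory
mkdir extract
tar xzf homes.tar.gz -C extract
cat extract/home/file1    # prints hello
```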

That's all there is to it! Thanks for reading, and I hope this helped.


Sunday, February 7, 2016

Week 18: Day 050 - Securing TCP/IP


Hello again, get ready for a whole new chapter coming your way! Today we're gonna talk about Securing TCP/IP with encryption and stuff. Emphasis on stuff. I would usually say "let's get right in", but I've said that too many times. So, let's begin!

So what is Encryption? Well, it's scrambling data so badly that even an evil genius jerkface who wants to steal it can't read it. Alongside it come a few related concepts. Nonrepudiation means that the data is verified to be what it was when it was first sent. Now if some guy decides to access the data, then he has to go through Authentication. It also verifies, but this time the actual guy accessing it, rather than the data sent. Authorization is basically what you're approved to do. If I'm authorized to be an admin on this computer, for example, and you aren't, then I get to be a dictator, and you don't. All of these things overlap in one form or another, but at the end of the day, they're the reasons why encryption is so excellent at protecting sensitive data.

The data that travels on our network is simply ones and zeros. The first step toward scrambling your data so no one can understand it is making a cipher, which requires an algorithm. If I saw a string of ones and zeros, I'd be like, what in the world does this mean? Well, if it were part of an HTTP segment, my web browser would know that it was Unicode: pretty much numbers representing letters. There are different types of encryption; a couple are covered on pg. 362, one being "eXclusive OR" (binary XOR), which works with letters and numbers. You can crack encryptions using word patterns, frequency analysis, or brute force. When running cleartext through a cipher algorithm using a key, you get ciphertext. Over the years, many different algorithms have been used. A symmetric-key algorithm uses the same key for both encryption and decryption, meaning that you'd need the same key for both tasks. If that is not the case, then two different keys are used, making it an asymmetric-key algorithm. But all this encryption stuff goes way beyond the Network+ curriculum.

Symmetric-key algorithms come in two flavors. The first is the block cipher, which encrypts data in single "chunks". For example, if a Word document had 100,000 bytes, this type of encryption would take 128-bit chunks and encrypt each one separately. The alternative is the "stream cipher", which takes a single bit at a time and encrypts on the fly. The oldest TCP/IP symmetric-key algorithm is the "Data Encryption Standard" (DES). There are several derivatives of DES, like 3DES, International Data Encryption Algorithm (IDEA), and Blowfish. On the streaming side, the most common symmetric-key algorithm is Rivest Cipher 4 (RC4). Over the years those encryptions have become more vulnerable, making the most used encryption the "Advanced Encryption Standard" (AES). It uses a 128-bit block size, and a 128-, 192-, or 256-bit key size.

The main issue with symmetric-key encryption is that if some guy gets hold of the key while it's being sent, then he or she can access the data without your knowledge. To fix this problem, two keys were used, one to encrypt and one to decrypt. This was known as "public-key cryptography". Ron Rivest, Adi Shamir, and Leonard Adleman made improvements to that, which were called "Rivest Shamir Adleman" (RSA), literally just their last names put together. Here's how it works:

Imagine that Bob wanted to send Bailey an encrypted e-mail. Well, SMTP cannot encrypt, so they need to handle the encryption themselves. Before Bob sends the email, Bailey generates two keys. One of the keys stays on her computer, and that's the private key, while the other key is sent to Bob, and that's the public key. Those two keys are called a key pair. This algorithm works by encrypting data with a public key, and decrypting that same data with the matching private key. This way Bob can encrypt and send a message to Bailey, which can only be decrypted by Bailey's private key. Likewise, if Bob wants to receive an e-mail message from Bailey, Bob must generate his own key pair and send Bailey the public key. In a typical public-key cryptography setup, everyone has their own private key plus a copy of the public keys they need for secure communication. Before moving on, let's look at encryption and the OSI model:

- Layer 1: No common encryption done.
- Layer 2: A common area for encryption, using proprietary encryption devices. These boxes scramble the data in an Ethernet frame, except the MAC address info. Devices or programs encode and decode the information.
- Layer 3: Only one common protocol encrypts at Layer 3: IPSec. IPSec is typically done via software that takes the IP packet and encrypts everything inside the packet, leaving only the IP addresses and a few other fields unencrypted.
- Layer 4: Neither TCP nor UDP offers encryption methods.
- Layer 5 and 6: No encryption done.
- Layer 7: Many applications have their own encryption, SSL/TLS are common Layer 7 standards.
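The Bob-and-Bailey key-pair exchange above can be sketched with the openssl command-line tool (assuming openssl is installed; the file names and message here are made up for illustration):

```shell
d=$(mktemp -d)
cd "$d"

# Bailey generates a key pair: a private key she keeps...
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out priv.pem 2>/dev/null
# ...and a public key she hands to Bob
openssl pkey -in priv.pem -pubout -out pub.pem

# Bob encrypts his message with Bailey's PUBLIC key
printf 'meet at noon' | openssl pkeyutl -encrypt -pubin -inkey pub.pem -out msg.enc

# Only Bailey's PRIVATE key can decrypt it
openssl pkeyutl -decrypt -inkey priv.pem -in msg.enc
```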

Identifying who's passing out the keys falls under nonrepudiation. As I said before, it just means that the receiver of the info knows that the sender is who they think it is. Nonrepudiation comes in several forms, but most use math magic called a "hash". A hash (cryptographic hash function) is a math function that takes a string of binary digits of any length and produces a fixed-length "checksum" or "digest"; the same input always produces the same checksum. I already know what a checksum is. The most popular hash is "Message-Digest Algorithm version 5" (MD5). It's not the only one though; there's also the Secure Hash Algorithm (SHA), which has two versions, SHA-1 and SHA-2. Many things use hashes, even SMTP.
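You can see checksums in action with the md5sum and sha1sum tools; "abc" is a standard published test vector, so these digests are the same on every machine:

```shell
# MD5 of the three bytes "abc" (printf avoids a trailing newline)
printf 'abc' | md5sum
# prints: 900150983cd24fb0d6963f7d28e17f72  -

# SHA-1 of the same input: a completely different, longer digest
printf 'abc' | sha1sum
# prints: a9993e364706816aba3e25717850c26c9cd0d89d  -
```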

A digital signature is another string of ones and zeroes, one that can only be generated by the sender. The person with the matching public key does something to the digital signature using that key to verify it. When you're doing business with someone you don't know, you should try to verify the source. A certificate is a standard way of doing just that. I already know about this, but essentially you just go to the top left of your browser where the "https://" is; https sites usually have certificates, and you'll be able to tell if the site is secure by viewing the certificate details under "Security" and seeing the SSL certificate it has. VeriSign is a very well-known certificate authority, for example. The way VeriSign would certify the web site is by acting as a root, giving the website a VeriSign signature. Through intermediate certificate authorities between VeriSign's root and the user's certificate, a tree of certificate authorization is created. Together, the organization is called a "public-key infrastructure" (PKI). However, PKI does not necessarily have to be used for certificates. Digital certificates and asymmetric cryptography have a lot in common, because the certificates verify the exchange of public keys.

It is very important to know the different types of authentication available in TCP/IP networks. Now, authorization is key, and we all know what that word means: are you allowed, or not? Well, in networking you can provide many levels of authorization. To define these levels of access you use an "access control list" (ACL). There are three ACL access models: mandatory (MAC), discretionary (DAC), and role based (RBAC). MAC is a security model in which every resource is assigned a label that defines its security level. If you don't have the right level, you don't get access. Then DAC gives control to the owner of the resource to choose who gets access to it. Finally, RBAC grants access to the resource based on the user's role in the network. Understand them for the Network+ exam!

Mike Meyers said that TCP/IP was never really meant for security. Authentication standards are some of the oldest standards in TCP/IP. Some are older than even the internet itself. Back in the days of dial-up, several types of authentication were used. "Point-to-Point Protocol" (PPP) gives the ability for two point-to-point devices to connect with a username and password, while negotiating a network protocol. Here are the five phases to PPP:

1. Link dead - This is a way of saying there is no link yet. The modem is turned off; nothing is going on. This is where the PPP conversation begins. The main player here is the "Link Control Protocol" (LCP). The LCP will get the connection started.

2. Link establishment - The LCP communicates with the LCP on the other side of the PPP link.

3. Authentication - This is when the authentication takes place, usually username and password.

4. Network layer protocol - In this phase the Layer 3 protocol that will run over the link is negotiated; it's mostly TCP/IP obviously, but PPP also supports a bunch of ancient protocols.

5. Termination - Two ends of the PPP connection send each other termination packets, and the link is then closed.


PPP provided the first common method to get a server to request a username and password. Under PPP, the side asking for the connection is the "initiator", and the other side, called the "authenticator", holds a list of usernames and passwords. There are two methods to authenticate. The first is the "Password Authentication Protocol" (PAP), but anyone who can tap the connection can learn the username and password, which means its security sucks. So everyone uses the "Challenge Handshake Authentication Protocol" (CHAP), which is more secure; it relies on hashes and stuff. CHAP keeps repeating the entire process to prevent the attacks which PAP is vulnerable to. Quick note: Microsoft invented a better version of CHAP called MS-CHAP.

To better protect PPP, a set of standards was made called "Authentication, Authorization and Accounting" (AAA). The way it works is that during authentication, a computer trying to connect to the network needs to give some kind of credentials to access the network. It's usually a username and password, but it could also be a smart card, a retinal scan, a digital certificate, or a combo. Once the computer is authenticated, the network processes that data and decides what permissions it gets; this is called authorization. Then accounting is basically keeping logs of all the logon attempts and other data. Once AAA became the norm, people created two standards of AAA.

The first one is "Remote Authentication Dial-In User Service" (RADIUS). It's the better known of the two AAA standards. It involves three kinds of devices: the RADIUS server, which has access to the database with all the usernames and passwords; some Network Access Servers (NASs), which control the modems; and a group of systems that dial into the network. To use RADIUS you need a RADIUS server; many use "Internet Authentication Service" (IAS) in Microsoft environments, while Unix/Linux shops use FreeRADIUS. It uses UDP ports 1812 and 1813, or 1645 and 1646. Then there's "Terminal Access Controller Access Control System Plus" (TACACS+), which was developed by Cisco and supports many routers and switches. The main differences from RADIUS are that it uses TCP port 49 and that it separates AAA into distinct parts. It uses hashes as well, but it can also use Kerberos.

Next, on a completely different note, we have Kerberos, which has nothing to do with PPP. "Kerberos" is an authentication protocol made for security purposes, and it was even adopted by Microsoft for its amazingness. The key component in Kerberos is the "Key Distribution Center" (KDC), no pun intended, which combines the "Authentication Server" (AS) and the "Ticket-Granting Service" (TGS). When your client logs onto the domain, it sends a hash of the username and password to the AS. The AS compares it to its own, and if it matches, it will send back a "Ticket-Granting Ticket" (TGT). From this point, the client is authenticated, but not authorized. The client will then send the TGT to the TGS to be authorized. The TGS will then send a timestamped service ticket, or "token", back to the client. This token is the key to accessing any resource in the domain. Timestamping is important because Kerberos will continually ask for a new token (every 8 hours).

Once the whole token thing got popular, people made standards that allow any two devices to authenticate. The first prominent one was the "Extensible Authentication Protocol" (EAP). It is a PPP wrapper that EAP-aware applications can use. There are many variations:

- EAP-PSK (Personal Shared Key)
- EAP-TLS (Transport Layer Security)
- EAP-TTLS (Tunneled TLS)
- EAP-MS-CHAPv2
- EAP-MD5
- LEAP (Lightweight) [Most Common]


Completion Status: 59%
Pages Left:
- Book: 279 pages

Friday, February 5, 2016

Week 18: Day 049 - Essential File Management Tools


Hello, today I will be continuing my blog posts, after a long snowstorm which kept us out of school for two weeks. In this chapter, I will take the exercises put it on here, and then comment on what I do. In the end, I will answer the review questions.

To understand the way the Linux file system is organized, knowing the concept of mounting is important. Sometimes it's not a good idea to store everything in one place: it decreases system performance and makes it harder to make additional storage space available, so there are real advantages to mounting dedicated devices. Let's look at the directories. "/boot" contains files required for booting your computer. "/var" should be put on a dedicated device, because it can fill up the storage on your server. "/home" is the directory which contains user directories, and should be placed on a dedicated device like /var, for security reasons. "/usr" contains the operating system files. The mount command shows all of the mounted devices. "df -hT" shows available disk space on the mounted devices, and "findmnt" shows the same mounts in a nicer, tree-like fashion; of these, "df -hT" is the simplest way to see both the mounts and their disk usage at once.
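For example (output varies per machine, so none is shown here):

```shell
# Mounted filesystems with type (-T) and human-readable sizes (-h)
df -hT

# The same mounts presented as a tree
findmnt | head -n 10
```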

Moving on, there's also a thing called "wildcards" which I should know. The * basically matches everything: if you were to run ls *, it would show every file in your working directory.

In this exercise, you learn how to work with directories.

1. Open a shell as a normal user. Type cd. Next, type pwd, which stands for print working directory. You’ll see that you are currently in your home directory, a directory with the name /home/<username>.

They were right, that happened.

2. Type touch file1. This command creates an empty file with the name file1 on your server. Because you currently are in your home directory, you can create any file you want to.

It created that file in my home directory.

3. Type cd /. This changes the current directory to the root (/) directory. Type touch file2. You’ll see a “permission denied” message. Ordinary users can create files only in directories where they have the permissions needed for this.

I was denied permission. You must be sudo to do anything in the root directory I guess.

4. Type cd /tmp. This brings you to the /tmp directory, where all users have write permissions. Again, type touch file2. You’ll see that you can create items in the /tmp directory (unless there is already a file2 that is owned by somebody else).

I'm in the temp directory. Users of all kinds can do whatever they want there, cause it's temporary!

5. Type cd without any arguments. This command brings you back to your home directory.

Quite obvious.

6. Type mkdir files. This creates a directory with the name files in the current directory. The mkdir command uses the name of the directory that needs to be created as a relative pathname; it is relative to the position you are currently in.

Already knew that.

7. Type mkdir /home/$USER/files. In this command, you are using the variable $USER, which is substituted with your current username. The complete argument of mkdir is an absolute filename to the directory files you are trying to create. Because this directory already exists, you'll get a "file exists" error message.

What was the point of that? It already exists!

8. Type rmdir files to remove the directory files you have just created. The rmdir command enables you to remove directories, but it works only if the directory is empty and does not contain any files.

Codecademy taught me this.

Since the majority of stuff I already learned in Codecademy, I will skip a bunch of this. Continuing though, there are things called links. They create links, similarly to creating a shortcut on Windows. There are two different types of links, hard links and symbolic links.

In this exercise, you work with symbolic links and hard links:

1. Open a shell as a regular (nonroot) user.

Use screen for this!

2. From your home directory, type ln /etc/passwd .. (Make sure that the command ends with a dot!) This command gives you an “operation not permitted” error because you are not the owner of /etc/passwd.

Do sudo for this, it requires that kind of permission.

3. Type ln -s /etc/passwd .. (Again, make sure that the command ends with a dot!) This works; you do not have to be the owner to create a symbolic link.

They're right, no owner privileges needed.

4. Type ln -s /etc/hosts. (This time with no dot at the end of the command.) You’ll notice this command also works. If the target is not specified, the link is created in the current directory.

I've created a symbolic link to hosts. The shell highlights the link in a different color.

5. Type touch newfile and create a hard link to this file by using ln newfile linkedfile.

Now both words within the home directory are highlighted in blue.

6. Type ls -l and notice the link counter for newfile and linkedfile, which is currently set to 2.

7. Type ln -s newfile symlinkfile to create a symbolic link to newfile.

This worked.

8. Type rm newfile.

Interestingly, linkedfile remains, while symlinkfile is still there but no longer has any file to "shortcut" to, so it's kind of just dead (a dangling link).

9. Type cat symlinkfile. You will get a “no such file or directory” error message because the original file could not be found.

True.

10. Type cat linkedfile. This gives no problem.

Already said that.

11. Type ls -l and look at the way the symlinkfile is displayed. Also look at linkedfile, which now has the link counter set to 1.

Yes, this means that linkedfile is now the only name left for the file. A hard link isn't a connection to the name newfile; both names pointed at the same underlying file (the same inode), so deleting newfile just removed one name, and the data lives on under linkedfile.

12. Type ln linkedfile newfile.

This brought newfile, back to life!

13. Type ls -l again. You’ll see that the original situation has been restored.
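The whole exercise can be sketched as one runnable script (done in a throwaway directory; stat -c %h is the GNU coreutils way to print the hard-link count that ls -l shows):

```shell
# The hard link / symbolic link exercise above, end to end.
cd "$(mktemp -d)"
touch newfile
ln newfile linkedfile                          # hard link: a second name for the same inode
ln -s newfile symlinkfile                      # symbolic link: points at the *name* newfile
stat -c %h newfile                             # prints 2: the inode now has two names
rm newfile
cat linkedfile                                 # still fine: the inode survives while any name remains
cat symlinkfile 2>/dev/null || echo "dangling" # the symlink now points at nothing
ln linkedfile newfile                          # "revives" newfile: a fresh name for the surviving inode
stat -c %h newfile                             # back to 2, the original situation
```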

Final note: I will devote an entire blog post to tar, since I have to make a presentation about it. But for now, this is where it ends. Time for review questions!

Review Questions

1. Which directory would you go to if you were looking for configuration files?

/etc

2. What command enables you to display a list of current directory contents, where the newest files are listed first?

ls -alt

3. Which command enables you to rename the file myfile to yourfile?

mv myfile yourfile

4. Which command enables you to wipe an entire directory structure, including all of its contents?

rm -rf [directory name]

5. How do you create a link to the directory /tmp in your home directory?

ln -s /tmp

6. How would you copy all files that have a name that starts with a, b, or c from the directory /etc to your current directory?

cp /etc/[abc]* .

7. Which command enables you to create a link to the directory /etc in your home directory?

ln -s /etc ~

8. What is the safe option to remove a symbolic link to a directory?

rm symlink is the SAFEST.

9. How do you create a compressed archive of the directories /etc and /home and write that to /tmp/etchome.tgz?

tar zcvf /tmp/etchome.tgz /etc /home

10. How would you extract the file /etc/passwd from /tmp/etchome.tgz that you have created in the previous step?

tar zxvf /tmp/etchome.tgz etc/passwd (tar strips the leading / when it creates the archive, so the member is stored under the relative path etc/passwd)
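The glob (question 6) and tar (questions 9 and 10) answers can be demonstrated in a scratch directory, so nothing in the real /etc or /home is touched:

```shell
# Stand-in etc/ and home/ directories with a few files to glob and archive.
workdir="$(mktemp -d)"
cd "$workdir"
mkdir -p etc home
echo "root:x:0:0" > etc/passwd
touch etc/adjtime etc/bashrc etc/crontab etc/fstab
mkdir copies && (cd copies && cp ../etc/[abc]* .)  # copies adjtime, bashrc, crontab; not fstab
tar zcf etchome.tgz etc home                       # z = gzip, c = create, f = archive file
rm -r etc home
tar ztf etchome.tgz                                # list members: stored paths are relative
tar zxf etchome.tgz etc/passwd                     # extract one member by its stored path
cat etc/passwd
```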

Wednesday, January 20, 2016

Week 17: Day 048 - Network Naming


Welcome to my new format of doing blog entries on Network+. You should expect to see up to nine posts until the end of the month on this. I think this will all be done by the end of January, which is great for me. I skipped a week of blog posts since I was really busy learning the command line for Linux. This chapter is about network naming. You have a name for your network as a convenience, and to be able to communicate over the internet, a translation takes place so the network can switch between a domain name and a regular IP address. This is called DNS, which is the first topic of coverage; let's get right into it.

Domain Name System (DNS) is a name resolution system which translates domain names into IP addresses, so I don't have to type something like http://93.32.55.25 to open a webpage. In the early days of TCP/IP, a system called HOSTS was used. A HOSTS file had a list of the names and IP addresses of computers on the internet; there weren't that many computers back then. Here's an example of what it looks like:

192.168.2.1     fred
201.32.16.4     school2
123.21.44.16   server

The HOSTS file on every system was updated at 2 AM every morning. If you wanted to contact fred, your system would just look up his name in HOSTS and use that address. When the internet grew, this became impractical. Important to note: a # in a HOSTS file marks that line as a comment, kind of like in Python. Long story short, to see how it still works today: ping a random website in your CMD (command prompt), take the IP address you get back, open your HOSTS file in a text editor, and add a line with that IP followed by a name like "bob". Save the file, then type "ping bob" into cmd, and it will ping that address.
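The lookup logic can be tried without touching the real HOSTS file at all, against a throwaway file in the same format (names and addresses are the made-up ones from the sample above):

```shell
# Build a HOSTS-format file, comments and all, then resolve a name from it with awk.
hostsfile="$(mktemp)"
cat > "$hostsfile" <<'EOF'
# comment lines start with #, just like in Python
192.168.2.1     fred
201.32.16.4     school2
123.21.44.16    server
EOF
# Look up "fred" the way the old resolver did: first matching name wins.
awk -v name=fred '!/^#/ && $2 == name { print $1; exit }' "$hostsfile"
```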

The Domain Name System (DNS) was created out of the need to replace HOSTS. What ended up becoming standard DNS was a method in which the top dog DNS systems delegated jobs to systems below them, which delegated jobs to the ones below them. As you can see, it's very bureaucratic. These systems run a special program and are called "DNS servers". The top dog ones are powerful computers around the world, which work as a team known collectively as the "DNS Root Servers". The DNS root has the complete name resolution table, while the resolution work is delegated to other DNS servers. Under the DNS root, the next part of the hierarchy is the "Top-Level Domain" (TLD) names. These are the famous .com, .org, .net, etc. names at the end of a URL. Under that are the second-level domains, which support individual computers. In essence, the domain name is masking the IP address of the individual machine, and it's at the second level that this happens.

In terms of the DNS hierarchical name space, it's basically a tree structure which contains all possible names within a single system. HOSTS used a "flat name space", which is just one big unorganized list. I already understand the hierarchical system, so no need to go over that (but if I did, pg. 321-323). The name space works a lot like the file system on a computer. I've been playing around with Terminal in my other course, and it's taught me a lot about hierarchical file systems on Linux; it's essentially the same on Windows. But in the world of DNS, you start out with the "root", then the "domain", and then the "host names". You could also use DNS on your own TCP/IP network; it's not exclusive to the internet. That would be called an "intranet". Regardless, the DNS naming convention is the opposite of a computer's file paths.
The complete DNS name, host plus domain, is called a "Fully Qualified Domain Name" (FQDN), which is written with the root on the far right, the domains to the left of it, and the host name on the far left. So on Windows, if a path reads C:\Program Files\Steam\SteamApps, the world of DNS would reverse it to SteamApps/Steam/Program Files/C:, which may seem weird, but deal with it!
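One way to see this ordering: print an FQDN's labels most-significant-first, the way a file path reads. A tiny helper (POSIX awk assumed, function name invented):

```shell
# Split an FQDN on dots and print the labels in reverse: TLD first, host last.
fqdn_reversed() {
  echo "$1" | awk -F. '{ for (i = NF; i >= 1; i--) printf "%s%s", $i, (i > 1 ? "." : "\n") }'
}
fqdn_reversed "www.microsoft.com"   # prints: com.microsoft.www
```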

Then there's the "name servers". Here are the three key ones:
- DNS Server: A DNS server is a computer running DNS server software
- Zone: A zone is a container for a single domain that gets filled with records.
- Record: A record is a line in the zone data that maps an FQDN to an IP address.
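To make the zone/record idea concrete, here is a sketch of what a few records in a zone might look like, in BIND-style syntax (all names and addresses are invented):

```
$ORIGIN example.com.
@       IN  SOA  ns1.example.com. admin.example.com. (
                 2016012001 ; serial
                 3600       ; refresh
                 900        ; retry
                 604800     ; expire
                 86400 )    ; minimum TTL
@       IN  NS   ns1.example.com.
ns1     IN  A    192.168.2.1
www     IN  A    192.168.2.10
```

Each of the A lines at the bottom is a record mapping an FQDN (like www.example.com) to an IP address, and the whole file is the zone.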

Systems with DNS server software contain DNS information. A network usually has one DNS server for the entire network. On pg. 327 there is an example of an authoritative DNS server, which lists all host names on the domain and their corresponding IP addresses. You can have a single DNS server as authoritative. Every DNS server knows the name and address of the "Start of Authority" (SOA). If Mikes-PC.Support.Houston needs the IP address of Server1.Dallas, then the network has to choose an authoritative DNS server. Say that DNS1.Dallas is authoritative for all Dallas domains and DNS1.Houston is in charge of all Houston domains. As root, the Houston server has a listing for the SOA in the Dallas domain, but does not know the IP address of every system on it. The requesting system will ask the Dallas DNS server for the IP address of the system it needs. There are advantages to the hierarchy: almost all web servers are called www, and DNS simply appends the domain names to the server names. No two machines have the same FQDN, because each must fit within the worldwide hierarchy.

To access the internet you don't have to use DNS; it just makes things easier. Browsers accept URLs like www.google.com, but convert them into an IP address to access the webpage. Moving on: to broadcast for name resolution, the host sends a message to all machines on the network, requesting a response from the system with that name. The broadcast stops at the router, since routers don't forward broadcasts. The final way of resolving a name to an IP address is, of course, to use DNS. To request the IP address of www.microsoft.com, for example, your PC needs the IP address of its DNS server. You have to enter the DNS info into your system by using the "TCP/IPv4 Properties" dialog box. I've used it before. Enter what Mike has down on pg. 332 and see what happens! Every OS has a tool like the one on Windows; on Ubuntu it's the "Network Configuration Utility". You can verify your settings in Command Prompt with "ipconfig /all" and on Linux with "cat /etc/resolv.conf".
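For reference, a minimal /etc/resolv.conf might look like this (the search domain is invented; 8.8.8.8 and 8.8.4.4 are Google's public DNS servers, used purely as an example):

```
search example.lan
nameserver 8.8.8.8
nameserver 8.8.4.4
```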

The DNS server receives this request for the IP of www.microsoft.com from your client. Your DNS server first checks its cache of previous FQDN lookups to see if www.microsoft.com is there. Let's say it isn't; your DNS server needs to find it. It may not know the address for www.microsoft.com, but it knows the 12 root name server operators, and those know the addresses for all the top-level domains. The root servers will send your DNS server the IP address of a .com server. But that .com DNS server doesn't know the address for www.microsoft.com either! It only knows the IP of the microsoft.com DNS server. Finally, the microsoft.com server does know the IP address of www.microsoft.com (finally).
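The referral chain above can be sketched as a toy lookup, where each "server" only knows the next hop. Everything here is made up for illustration (the final address is from the TEST-NET documentation range, not Microsoft's real IP):

```shell
# Each level of the hierarchy delegates downward until one server actually knows the answer.
lookup() {
  case "$1" in
    root)          echo "ask the .com TLD server" ;;
    com)           echo "ask the microsoft.com server" ;;
    microsoft.com) echo "www.microsoft.com is at 198.51.100.7" ;;  # made-up documentation address
  esac
}
lookup root
lookup com
lookup microsoft.com
```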

I'm skipping a section of the chapter which doesn't cover much for the test. Let's go straight to troubleshooting DNS. Most DNS problems result from an issue with the client. How do we know? DNS servers rarely go down. Everything you do on an IP network depends on DNS to find the right system to communicate with for whatever job an application needs to do. FTP clients use DNS to find their servers, and web browsers use DNS to find web servers. The first clue to the rare occasion in which a DNS server is at fault is a "server not found" error. To test, flush the DNS cache by typing "ipconfig /flushdns" into cmd. If you can't use your web browser for testing, just use the "ping" command. Run ping from cmd, followed by a famous website, for example "www.google.com". If you get a "request timed out" message, that's okay; you just want to see if DNS is resolving FQDNs into IP addresses. If you get a "server not found" error, you'll need to ping again with just an IP address. The IP for Google is 74.125.95.99, try memorizing that! If ping works with the IP address but not the website, then it's a DNS problem! Simple.
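That test procedure can be sketched as a small Linux script (the iputils ping flags -c and -W are assumed; on Windows the flags differ, and actual results depend on your connectivity):

```shell
# Name-vs-IP ping test: if the raw IP answers but the name doesn't, suspect DNS.
site="www.google.com"
result="$(
  if ping -c 1 -W 2 "$site" >/dev/null 2>&1; then
    echo "DNS resolves $site: name resolution works"
  elif ping -c 1 -W 2 8.8.8.8 >/dev/null 2>&1; then
    echo "IP works but the name does not: DNS problem"
  else
    echo "no IP connectivity at all: the problem is below DNS"
  fi
)"
echo "$result"
```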

In addition, NetBIOS is a system created by Microsoft for (older) versions of Windows, invented in the 80s. Basically, Microsoft kept adapting NetBIOS to work with TCP/IP and made it DNS-compatible. They created a special text file called LMHOSTS, which is like HOSTS except for NetBIOS names, and to reduce overhead they created the "Windows Internet Name Service" (WINS) for name resolution. Two reasons to use a WINS server would be to reduce overhead from broadcasts, and to enable NetBIOS name resolution across routers. Why routers? Well, routers kill broadcasts, so they have that in common. To keep Windows systems connected to your WINS server from broadcasting, you'd use a WINS "proxy agent" to forward WINS broadcasts to the WINS server. To configure a WINS client, you only need to set the IP address of a WINS server in its WINS settings under Network Properties; then Windows will just look for the WINS server to register its NetBIOS name. I'll skip the troubleshooting bit for this, since it's unimportant to the test.

Finally, here's how to diagnose TCP/IP networks:

1. Diagnose the NIC by pinging the loopback, typing into cmd: "ping 127.0.0.1" or "ping localhost"

2. Diagnose locally.

3. Check IP address and Subnet Mask

4. Run netstat by typing into cmd: "netstat"

5. Run netstat -s (same as before except with -s modifier)

6. Diagnose to the gateway.

7. Diagnose to the Internet.

Completion Status: 52.5%
Pages Left:
- Book: 322 pages


Questions:
1. NetBIOS uses what type of name space?
Flat name space.

2. The DNS root directory is represented by what symbol?
. (dot)

3. What command do you use to see the DNS cache on a Windows system?
ipconfig /displaydns

4. The users on your network haven't been able to connect to the server for 30 minutes. You check and reboot the server, but you're unable to ping either its own loopback address or any of your client systems. What should you do?
Replace the NIC, cause it sucks and it failed.

5.  A user calls to say she can't see the other systems on the network when she looks in My Network Places. You are not using NetBIOS. What are your first two troubleshooting steps? (Select Two)
Ping the loopback address.
Ping several neighboring systems using both DNS names and IP addresses.

6. What is checked first when trying to resolve an FQDN to an IP address?
HOSTS file

7. Which type of DNS record is used by mail servers to determine where to send e-mail?
MX record

8. Which command enables you to eliminate DNS cache?
ipconfig /flushdns

9. Which tool enables you to query the functions of a DNS server?
nslookup

10. Where does a DNS server store the IP addresses and FQDNs for the computers within a domain?
Forward lookup zone.