Padheye.com : Discover Excellence

The Path To Discover Excellence


Saturday, 14 March 2020


What is GitHub?

 


One of the best ways to share what you’ve been learning with other people is to put your code on GitHub. GitHub is both a website and a service that facilitates software development by allowing you to store your code in containers, called repositories, and by tracking changes made to your code. In addition, it offers a hosting service and tools to build, test, and deploy code.

GitHub uses Git, a version-control tool, to manage your projects by tracking changes to files and allowing multiple people to work on the same project. Although GitHub and Git have similar names, GitHub is a service, while Git is a development tool that you can use entirely outside of, and without, GitHub.
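To make the relationship concrete, here is a minimal, hedged sketch of publishing a local Git project to GitHub. The repository URL is a hypothetical placeholder, and your default branch may be named main or master depending on your setup:

# Create a local Git repository and record a first commit
git init
git add .
git commit -m "Initial commit"

# Point it at an empty repository created on GitHub (hypothetical URL)
git remote add origin https://github.com/your-username/your-repo.git

# Upload the local history to GitHub
git push -u origin main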

Why is GitHub important?

There are many reasons why knowing about GitHub is important for your personal growth as a developer. A large part of GitHub's appeal is the access it grants to a massive community of developers around the world who openly share their code, projects, and software development tools with each other. So whether you want to keep working on your Git skills, build your programming portfolio, or find work, GitHub can help.

How to Sign Up for an Account

Now that you’re aware of GitHub’s benefits, you probably want to sign up for an account and try it out yourself. First navigate to the home page of the GitHub website, https://github.com.

In the upper-right corner click on the Sign Up button, as outlined in this screenshot:

[Screenshot: GitHub home page with the Sign Up button highlighted]

Create Your Account

You’ll see a page with a form under the heading “Create your account”.

[Screenshot: the "Create your account" page on GitHub]

Fill in the fields for username, email address, and password. Choosing a username and an email address is especially important, so be sure to read through the following tips.

Username

When choosing a username, it’s wise to pick one you wouldn’t mind future employers or colleagues seeing. A combination of your first and last name, like firstnamelastname, or your initials, like i-lastname, works well because it makes it easy to find you on GitHub and to identify you when you make pull requests or reviews. Remember, you’re likely using this account to share or access code.

Also be aware that usernames are first come, first served, so your preferred name may not be available if someone else has already claimed it. Additionally, usernames may only contain alphanumeric characters and hyphens, and may not begin or end with a hyphen.

Email Address

As with usernames, pick an email address that you’re comfortable sharing with peers and potential employers. Because of the way Git works, your email address can be exposed publicly when you make a pull request or merge code into a repository, making it visible to anyone looking through your projects. When you sign up for a new GitHub account, your email address is hidden by default.
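If you also commit from the command line and want to keep your personal address out of your commits, one hedged option is to tell Git to use the noreply address GitHub associates with your account; the exact address (it ends in users.noreply.github.com) is shown in your GitHub email settings, so the value below is only a placeholder:

# Use GitHub's noreply address (placeholder shown) instead of your real email
git config --global user.email "your-username@users.noreply.github.com"

# Confirm what Git will now record as the author email
git config --global user.email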

Finish Creating Your Account

Lastly, fill out the password field. When you’re done filling out the fields, complete the verification step on the form.

Once you see a green checkmark, click on the blue Create an Account button.

Go to your email account, find the email from GitHub, and click on the button inside it to finish creating your account.

Settings

After successfully creating an account, you should see a page asking "What do you want to do first?" Go through the steps to complete your own setup process. You should then see a welcome page:

[Screenshot: GitHub welcome page]

You can either answer the optional questions or move on by clicking on the Complete setup button to finish creating your account.

Your browser should display a personal dashboard with a section for your projects and some messages:

[Screenshot: GitHub personal dashboard]

That’s it, you now have your very own GitHub account, which you can keep customizing as you go (see Going Further below). 🎉

Recap

With more people working remotely and with teams distributed across different countries and time zones, GitHub and Git can be valuable tools for collaborating on projects. You can also use GitHub for any file-based project, such as writing documentation.

Let’s review what you did in this article:

  • Learned that GitHub is both a website and a service for storing and sharing code
  • Learned that GitHub uses Git to facilitate software development by tracking changes

  • Created your own GitHub account

  • Enabled security features like keeping your email private and turning on two-factor authentication

Going Further

Once you feel comfortable navigating GitHub, consider doing the following:

  • Change your profile settings to receive job listings.

  • Add information about yourself in your profile, including an avatar, bio, location, etc.

  • Set your status in your profile to let people know what you’re doing.

  • If you aren’t already familiar with Git or just need a refresher on how to use it, then you might want to move on to the Codecademy Git course to set up your first project repository.

  • If you feel comfortable enough with Git, take a look at this article on GitHub Pages, GitHub’s hosting service that lets you turn a repository into a personal or project site, to build your portfolio site.

  • If you’re interested in paid accounts, GitHub has a pricing page with various types of accounts and features you can look at.

Show the world what you can do with your code!

Friday, 13 March 2020


How to set up a firewall in Linux?

 What is a Firewall?

A firewall is a network security system that filters and controls traffic based on a predetermined set of rules. It acts as an intermediary between your device and the internet.

NOTE: If you already know how the firewall works in Linux and just want the commands, then please go to the end of the tutorial.

How the firewall works in Linux:
Most Linux distributions ship with default firewall tools that can be used to configure a firewall. We will be using iptables, the default tool provided on Linux, to establish a firewall. iptables is used to set up, maintain, and inspect the tables of IPv4 and IPv6 packet filter rules in the Linux kernel.

Note: All the commands below need sudo privileges.

Chains:

Chains are sets of rules defined for a particular task.



We have three chains (sets of rules) which are used to process traffic:

  1. INPUT chain
  2. OUTPUT chain
  3. FORWARD chain

1. INPUT chain
Any traffic coming from the internet (network) towards your local machine has to go through the INPUT chain. That means it is checked against all the rules that have been set up in the INPUT chain.

2. OUTPUT chain
Any traffic going from your local machine to the internet needs to go through the OUTPUT chain.

3. FORWARD chain
Any traffic coming from an external network and going to another network needs to go through the FORWARD chain. It is used when your machine routes data between two or more connected networks.

Different policies:

There are three actions that iptables can perform on traffic:

  1. ACCEPT
  2. DROP
  3. REJECT

1. ACCEPT
When traffic matches an ACCEPT rule in its chain, iptables lets the traffic through.
That means it opens up the gate and allows the person to go inside the kingdom of Thanos.

2. DROP
When traffic matches a DROP rule in its chain, iptables silently discards that traffic.
That means the gate stays closed and the sender gets no reply.

3. REJECT
This action is similar to DROP, but it also sends a message back to the sender of the traffic stating that the data transfer has failed.
As a general rule, use REJECT when you want the other end to know the port is unreachable; use DROP for connections to hosts you don’t want people to see.




NOTE:
You need to keep one simple rule in mind here:
The rules you set in iptables are checked from the topmost rule to the bottom. As soon as a packet matches one of the rules, that rule’s action is applied and the rules below it are not checked. So be careful about the order in which you set up rules.

Basic iptables commands :

1. List the current rules of iptables:

To list the current iptables rules:

sudo iptables -L

The output would be:
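(The listing below is a rough sketch of what you would typically see on a machine with no rules defined; exact spacing varies between iptables versions.)

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination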

As you can see, we have three chains (INPUT, FORWARD, OUTPUT). We can also see column headers, but there are no actual rules. This is because most Linux distributions come with no predefined rules.

Let’s see what each column means.

Target:
This defines what action needs to be taken on the packet (ACCEPT, DROP, etc.).

prot:
This defines the protocol of the packet (e.g. TCP, UDP).

source:
This shows the source address of the packet.

destination:
This shows the destination address of the packet.

2. Clear the rules:

If you ever want to clear/flush out all the existing rules, run the following command:

sudo iptables -F

This will flush all the rules and reset iptables to an empty state.
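If you only want to clear one chain rather than everything, you can also name the chain when flushing, for example:

# Flush only the INPUT chain, leaving the other chains untouched
sudo iptables -F INPUT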

3. Changing the default policy of chains :

sudo iptables -P Chain_name Action_to_be_taken

As you can see in the iptables -L output above, the default policy of each chain is ACCEPT.

For example:
If you look at the FORWARD chain, you will see "Chain FORWARD (policy ACCEPT)". This means your computer allows any traffic to be forwarded to another computer.

In order to change the FORWARD policy to DROP:

sudo iptables -P FORWARD DROP

The above command will stop any traffic from being forwarded through your system. That means no other system can use your system as an intermediary to pass data.

Making your First Rule :

1. Implementing a DROP rule :

We’ll now start building our firewall policies. We’ll first work on the INPUT chain, since that is where incoming traffic is sent.

Syntax:-

sudo iptables -A/-I chain_name -s source_ip -j action_to_take

We’ll take an example to understand the topic.

Let’s assume we want to block traffic coming from the IP address 192.168.1.3. The following command can be used:

sudo iptables -A INPUT -s 192.168.1.3 -j DROP

This may look complicated, but most of it will make sense when we go over the components:

-A INPUT:
The -A flag appends a rule to the end of a chain. This part of the command tells iptables that we want to add a rule to the end of the INPUT chain.

-I INPUT:
With this flag the rule is instead inserted at the top of the chain.

-s 192.168.1.3:
The -s flag specifies the source of the packet. This tells iptables to look for packets coming from the source 192.168.1.3.

-j DROP:
This specifies what iptables should do with matching packets; in this case, drop them.

In short, the above command adds a rule to the INPUT chain which says: if any packet arrives whose source address is 192.168.1.3, drop it, i.e. do not allow the packet to reach the computer.

Once you execute the above command you can see the changes by using the command:-

sudo iptables -L

The output would be:
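(Roughly, the INPUT chain should now show the new rule; by default iptables resolves addresses and ports to names, so you may see "anywhere" rather than 0.0.0.0/0, and you can add -n for numeric output.)

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
DROP       all  --  192.168.1.3          anywhere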

2. Implementing an ACCEPT rule :

If you want to add rules for specific ports of your network, then the following commands can be used.

Syntax:-

sudo iptables -A/-I chain_name -s source_ip -p protocol_name --dport port_number -j Action_to_take

-p protocol_name:
This option is used to match packets that use the protocol protocol_name.

--dport port_number:
This option is available only if you give the -p protocol_name option. It tells iptables to look for packets that are going to the port port_number.

Example:
Let’s say we want to keep our SSH port open (we will assume in this guide that the default SSH port 22 is used) to the 192.168.1.3 address we blocked in the case above. That is, we only want to allow those packets coming from 192.168.1.3 that are destined for port 22.

What do we do?
Let’s try the command below:

sudo iptables -A INPUT -s 192.168.1.3 -p tcp --dport 22 -j ACCEPT

The above command says: look for packets originating from the IP address 192.168.1.3, using the TCP protocol, and destined for port 22 of my computer. If you find such packets, accept them.

The output for the command is:
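(A sketch of the listing at this point: the new ACCEPT rule has been appended below the earlier DROP rule.)

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
DROP       all  --  192.168.1.3          anywhere
ACCEPT     tcp  --  192.168.1.3          anywhere             tcp dpt:ssh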

But there is a problem with the above command: it actually does not allow the packets. Can you guess what it is?
HINT: It is related to the order in which the rules are checked.

Remember, as we discussed earlier, the rules you set in iptables are checked from top to bottom. Whenever a packet matches one of the upper rules, it is not checked against the lower rules.

Okay! Here’s the answer:
In our case, the packet was checked against the topmost rule, which says that iptables must drop any packet coming from 192.168.1.3. Once the packet matched this rule, it never reached the next rule, which would have allowed packets to port 22. Therefore it failed.

What can be done?
The easiest answer is to add the rule to the top of the chain: all you need to do is change the -A option to the -I option. (In our scenario we first delete the rule added above [refer to the next section] and then add the rule below.)

The command to do that is:-

sudo iptables -I INPUT -s 192.168.1.3 -p tcp --dport 22 -j ACCEPT

Now check the iptables configuration using the -L option. The output would be:
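(Sketch of the listing now, with the ACCEPT rule inserted above the DROP rule.)

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  192.168.1.3          anywhere             tcp dpt:ssh
DROP       all  --  192.168.1.3          anywhere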

Therefore, any packet coming from 192.168.1.3 is first checked to see whether it is destined for port 22. If it is, it is allowed to pass the firewall; otherwise it falls through to the next rule in the chain and is dropped.

Now that you have understood how to block and accept incoming traffic, let’s see how to delete rules:

3. Deleting a rule from iptables :

Syntax:-

sudo iptables -D chain_name rule_number

Example:
If we want to delete the rule which accepts traffic to port 22, which we added in the previous section, then:

sudo iptables -D INPUT 1

Remember that rule numbers start from 1.
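If you are not sure which number a rule has, one handy option is to list the rules along with their positions first:

# Show each rule with its position number in its chain
sudo iptables -L --line-numbers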

4. Saving your configuration :

This part is unnecessary if you are setting this up on a personal computer that is not a server, but keep in mind that iptables rules are not persistent: they are lost when the system reboots. If you are implementing a firewall on a server, you will usually want your configuration to survive a reboot, so it’s always better to save it.

There are a lot of ways to do this, but the easiest way I have found is the iptables-persistent package. You can install it from Ubuntu’s default repositories:

sudo apt-get update
sudo apt-get install iptables-persistent

Once the installation is complete, you can save your configuration using the command:-

sudo invoke-rc.d iptables-persistent save
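On newer Debian/Ubuntu releases the same package exposes its tooling under the name netfilter-persistent, so if the command above is not available on your system, the equivalent is most likely:

sudo netfilter-persistent save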

Well, this is the end of the tutorial.
Let’s quickly recap all the commands we have learned so far:

Summary :

1. List the current rules of iptables:

sudo iptables -L

2. To change the default policy:

sudo iptables -P Chain_name Action_to_be_taken

Example:-

sudo iptables -P FORWARD DROP

3. To clear/flush all the rules:

sudo iptables -F

4. To append a rule at the end of the chain:

sudo iptables -A

5. To append a rule at the start of the chain:

sudo iptables -I

6. To implement an ACCEPT rule:-

sudo iptables -A/-I chain_name -s source_ip -j action_to_take

Example:-

sudo iptables -A INPUT -s 192.168.1.3 -j ACCEPT

7. To implement a DROP rule:-

sudo iptables -A/-I chain_name -s source_ip -j action_to_take

Example:-

sudo iptables -A INPUT -s 192.168.1.3 -j DROP

8. Implementing rules on specific ports/protocols:-

sudo iptables -A/-I chain_name -s source_ip -p protocol_name --dport port_number -j Action_to_take

Example:-

sudo iptables -I INPUT -s 192.168.1.3 -p tcp --dport 22 -j ACCEPT

9. To delete a rule:-

sudo iptables -D chain_name rule_number

Example:-

sudo iptables -D INPUT 1

10. To save the configuration:-

sudo invoke-rc.d iptables-persistent save

And that’s the end of the tutorial. We have seen all the necessary commands that you need to implement a firewall on your local machine. There are various other things we can make our firewall do, but it is impossible to cover all of them in a single article, so I will be writing a few more articles explaining the remaining commands. Until then, keep experimenting!

Monday, 18 March 2019


Introduction of Firewall in Computer Network

A firewall is a network security device, either hardware or software based, which monitors all incoming and outgoing traffic and, based on a defined set of security rules, accepts, rejects or drops that traffic.

Accept : allow the traffic
Reject : block the traffic but reply with an “unreachable error”
Drop : block the traffic with no reply

A firewall establishes a barrier between secured internal networks and untrusted outside networks, such as the Internet.

History and Need for Firewall



Before firewalls, network security was performed by Access Control Lists (ACLs) residing on routers. ACLs are rules that determine whether network access should be granted or denied to specific IP addresses.
But ACLs cannot determine the nature of the packets they are blocking. Also, ACLs alone do not have the capacity to keep threats out of the network. Hence, the firewall was introduced.

Connectivity to the Internet is no longer optional for organizations. However, while accessing the Internet provides benefits to the organization, it also enables the outside world to interact with the organization’s internal network. This creates a threat to the organization. In order to secure the internal network from unauthorized traffic, we need a firewall.

How Firewall Works

A firewall matches network traffic against the rule set defined in its table. Once a rule is matched, the associated action is applied to the traffic. For example, one rule might say that no employee from the HR department can access data from the code server, while another rule says that the system administrator can access data from both the HR and technical departments. Rules are defined on the firewall according to the needs and security policies of the organization.
From the perspective of a server, network traffic can be either outgoing or incoming, and the firewall maintains a distinct set of rules for each case. Most outgoing traffic, originating from the server itself, is allowed to pass. Still, setting rules on outgoing traffic is always better in order to achieve more security and prevent unwanted communication.
Incoming traffic is treated differently. Most traffic that reaches the firewall uses one of three major transport-layer protocols: TCP, UDP or ICMP. All of these have a source address and a destination address. TCP and UDP also have port numbers; ICMP uses a type code instead of a port number, which identifies the purpose of the packet.

Default policy: It is very difficult to explicitly cover every possible rule on the firewall. For this reason, the firewall must always have a default policy. A default policy consists only of an action (accept, reject or drop).
Suppose no rule is defined about SSH connections to the server on the firewall. Then the default policy applies. If the default policy on the firewall is set to accept, any computer outside your office can establish an SSH connection to the server. Therefore, setting the default policy to drop (or reject) is always good practice.
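To make the idea concrete, here is a minimal, hedged sketch of a default-deny setup using the iptables commands from the previous article; the trusted network 203.0.113.0/24 and port 22 are assumptions chosen only for illustration:

# Allow SSH only from a trusted management network (hypothetical range)
sudo iptables -A INPUT -p tcp -s 203.0.113.0/24 --dport 22 -j ACCEPT

# Anything that matches no rule falls back to the default policy, which we set to DROP
sudo iptables -P INPUT DROP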

Generation of Firewall

Firewalls can be categorized based on their generation.

  1. First Generation - Packet Filtering Firewall : A packet filtering firewall is used to control network access by monitoring outgoing and incoming packets and allowing them to pass or stop based on source and destination IP addresses, protocols and ports. It analyses traffic at the transport layer (but mainly uses the first three layers).
    Packet filtering firewalls treat each packet in isolation. They have no ability to tell whether a packet is part of an existing stream of traffic; they can only allow or deny packets based on individual packet headers.

    A packet filtering firewall maintains a filtering table which decides whether a packet will be forwarded or discarded. Given a filtering table with the following rules, packets will be filtered as follows (a rough sketch of the equivalent iptables rules appears after this list):

    1. Incoming packets from network 192.168.21.0 are blocked.
    2. Incoming packets destined for the internal TELNET server (port 23) are blocked.
    3. Incoming packets destined for host 192.168.21.3 are blocked.
    4. All well-known services to the network 192.168.21.0 are allowed.
  2. Second Generation - Stateful Inspection Firewall : Stateful firewalls (performing stateful packet inspection) are able to determine the connection state of a packet, unlike packet filtering firewalls, which makes them more efficient. They keep track of the state of network connections travelling across them, such as TCP streams, so filtering decisions are based not only on the defined rules but also on the packet’s history in the state table.
  3. Third Generation - Application Layer Firewall : An application layer firewall can inspect and filter packets at any OSI layer, up to the application layer. It has the ability to block specific content and to recognize when certain applications and protocols (like HTTP, FTP) are being misused.
    In other words, application layer firewalls are hosts that run proxy servers. A proxy firewall prevents a direct connection between either side of the firewall; each packet has to pass through the proxy. It can allow or block traffic based on predefined rules.

    Note: Application layer firewalls can also be used as a Network Address Translator (NAT).

  4. Next Generation Firewalls (NGFW) : Next generation firewalls are being deployed these days to stop modern security breaches like advanced malware attacks and application-layer attacks. An NGFW includes deep packet inspection, application inspection, SSL/SSH inspection and many other capabilities to protect the network from these modern threats.
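As mentioned in the packet-filtering example above, here is a rough, hedged sketch of how the first three filtering rules could be written with iptables. The /24 mask on 192.168.21.0 is an assumption, the FORWARD chain is used on the assumption that the firewall routes traffic for the internal network (a host protecting itself would use INPUT), and rule 4 is omitted because "well-known services" depends on which ports a site considers well known:

# 1. Block incoming packets from the 192.168.21.0/24 network (mask assumed)
sudo iptables -A FORWARD -s 192.168.21.0/24 -j DROP

# 2. Block incoming packets destined for the internal TELNET server (port 23)
sudo iptables -A FORWARD -p tcp --dport 23 -j DROP

# 3. Block incoming packets destined for host 192.168.21.3
sudo iptables -A FORWARD -d 192.168.21.3 -j DROP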

 Types of Firewall

Firewalls are generally of two types: Host-based and Network-based.

  1. Host-based Firewalls : A host-based firewall is installed on each network node and controls each incoming and outgoing packet. It is a software application, or a suite of applications, that comes as part of the operating system. Host-based firewalls are needed because network firewalls cannot provide protection inside a trusted network. A host firewall protects each host from attacks and unauthorized access.
  2. Network-based Firewalls : Network firewalls function at the network level. In other words, these firewalls filter all incoming and outgoing traffic across the network. They protect the internal network by filtering the traffic using rules defined on the firewall. A network firewall might have two or more network interface cards (NICs). A network-based firewall is usually a dedicated system with proprietary software installed.

Both types of firewall have their own advantages.

References:
https://en.wikipedia.org/wiki/Firewall_(computing)
https://www.cisco.com/c/en_in/products/security/firewalls/what-is-a-firewall.html
http://nptel.ac.in/courses/106105084/31

   
This article is contributed by Abhishek Agrawal. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.

Sunday, 17 March 2019


Which Control Panel Should You Choose?

 


Deciding what’s the best web hosting control panel comes down to what you require. All of the control panels come with their own set of advantages and disadvantages. We’ll go over the top 10 web hosting control panels and discuss their pros and cons to help you understand what’s perfect for you.

By using a control panel you can simplify the management of:

  • Domains
  • Databases
  • Backups
  • Content Management Systems (CMS)
  • SSL Certificates
  • Emails
  • and more…

Now that you know the need for a control panel, let’s look at the top 10 web hosting control panels!

Quick List of the Best Web Hosting Control Panels

  1. cPanel – Best Linux-only web hosting control panel for users who want a control panel that’s tried and tested over the years
  2. Plesk – Perfect Windows/Linux web hosting control panel for users who want a decent UI
  3. CyberPanel – Web hosting control panel that focuses on webpage optimization and faster loading with high-speed caching
  4. Webmin – The best web hosting control panel for customization and advanced features
  5. Direct Admin – Easy to use, fast, stable, and cost-effective web hosting control panel
  6. Kloxo-MR – For users who’ve used Kloxo before, Kloxo-MR is a spin-off with added features
  7. Ajenti – Best low-cost and entry-level Python-based web hosting control panel
  8. Sentora – Best web hosting control panel for users looking for a modular system
  9. Froxlor – Clean interface, integrated ticketing, and reseller customer support system
  10. STORM Control Panel – A control panel which is managed by the company; you only have to connect it to your server

Wednesday, 13 March 2019


Developing a Linux-based shell

 What is a shell?

A shell is the visible part of an operating system that users interact with: users give commands to the shell, which in turn interprets these commands and executes them.

The following image shows the simplified execution process. The shell receives input and passes it to the lexical analyzer (discussed in detail below), which creates tokens. The output of the lexical analyzer is then passed to a parser, which checks it for syntax errors and executes the assigned semantic actions (this builds the command table). Finally, when the parser reaches a certain point, the table is executed.

The shell will be implemented in 3 components as shown in the architecture diagram below:

[Architecture diagram]



1. Lexical Analyzer
The first part of input parsing is the lexical analysis stage, where the input is read character by character to form tokens. We will use a tool called lex to build our lexer file; in this file we define each pattern followed by its token name. The lexical analyzer reads the input character by character and, when a pattern (the string on the left) matches, the matched text is converted to the token name on the right.
ex:

Command input: ls -al

The lexer will read l, then s, and form a token called WORD; then it will read - and the characters (al) and form an OPTION. The output, WORD OPTION, will be passed to the parser to check whether there is a syntax error.

1. "#" : IO
2. [ 1 ]?">" : IO
3. "" : IO
5. [ 1]?">>" : IO
6. [ 1-2]">&"[1-2 ] : IOR
7. "|" : PIPE
8. "&" : AMPERSAND
9. [ ]"-"[a-zA-Z0-9]* : OPTION
10. [ ]"--"[a-zA-Z=a-zA-Z]* : OPTION2
11. [\%\=\+\'\"\(\)\$\/\_\-\.\?\*\~a-zA-Z0-9]+ : WORD

The grammar above consists of 11 tokens; these tokens are formed when the input matches the token description.
The IO token is formed either by a '#' character or by '>', which may be preceded by the number one (at most once); it also covers the token we introduced to replace the error-redirection token. Another form of IO uses '>>', which again may be preceded by the number one. Finally, '>&' forms an IOR token and may be preceded and/or followed by either one or two.

The PIPE and AMPERSAND tokens are formed by '|' and '&' respectively. The OPTION token is formed when there is a hyphen preceded by a space and followed by alphabetic characters or numbers.
The OPTION2 token is formed when there are two hyphens preceded by a space and followed by alphabetic characters.
The WORD token can be formed from alphabetic characters, numbers and the following characters: %, =, +, ', ", (, ), $, /, _, -, ., ?, *, ~

2. Parser
After the tokens have been formed from the input, they are passed as a stream to the parser, which parses the input to detect syntax errors and execute the assigned semantic actions. A parser can be thought of as the grammar and syntax of the language (it defines what our commands will look like and what is acceptable). We will use a tool called yacc to compile the grammar, and we will construct the grammar as a set of states, which makes building and deploying the grammar easier.

Below is our grammar definition:

1. q00: NEWLINE {return 0;} | cmd q1 q0 | error;
2. q0: NEWLINE {return 1;} | PIPE q00 {clrcont;};
3. q1: option q2 | option option q2 | arg_list q3 | io_modifier q4 | background q5 | io_descr q3 | /*empty*/ {InsertNode(); clrcont();};
4. q2: arg_list q3 | io_modifier q4 | io_descr q3 | background q5 | /*empty*/ {InsertNode(); clrcont();};
5. q3: io_modifier q4 | io_descr q3 | background q5 | /*empty*/ {InsertNode(); clrcont();};
6. q4: file q3 ;
7. cmd: WORD {cmad.cmd = yylval.str;};
8. arg_list: arg | arg arg_list;
9. arg: WORD {insertArgNode(yylval.str);};
10. file: WORD {io_red(yylval.str);};
11. io_modifier: IO {cmad.op=yylval.str;};
12. io_descr: IOR {cmad.op=yylval.str;};
13. option: OPTION {cmad.opt = yylval.str;} | OPTION2 {cmad.opt2 = yylval.str;};
14. background: AMPERSAND {bg = '1';};
15. q5: /*empty*/{InsertNode(); clrcont();};



The above grammar specifies the different states of the parsing process.

The parser starts from state q00 and parses until it reaches one of the states q5, q3 or q1, which happens in a reverse manner because of the parsing technique in use (bottom-up parsing). The grammar reduces tokens based on their location: WORD can be reduced to cmd if it appears at the beginning, to arg_list if it appears after a command, or to file if it appears after a redirection. The sentence is then parsed according to the grammar. Starting from state q00, the parser moves to state q1 by reading a cmd. At state q1, if the parser reads an option, the sentence may be followed by arguments, IO or background, or by nothing; if the parser reads arguments, the sentence can only have redirection after; if the parser reads an ampersand, there should be nothing after it.

The process then starts again when the parser reads a pipe; this allows multiple simple commands to be connected by pipes to form a complex command.
We define a simple command as any command that consists of a command name, options, arguments and/or IO redirection.
Combining multiple simple commands using pipes results in a structure we call a complex command.

The semantic actions associated with the grammar build the parsing table and assign the command values to the data structure; once the command table has been built, it is sent to the executor.
The command table consists of rows of simple commands, and these rows are formed from a complex command connected by pipes. Each simple command entry holds: the command name to be executed; the options to be executed with the command; the arguments that should be passed to the command; standard input (stdIn), which specifies where the command will get its input from (by default the terminal, unless specified otherwise in the command); standard output (stdOut), which specifies where the command will print its output (by default the terminal); and standard error (stdError), which specifies where the command will print its error messages (by default the terminal, unless the user redirects it).

The grammar built allows the following syntax:

[Syntax diagram]

This allows a command with options, arguments, IO redirection and the ability to run as a background process (&). A command with any of the previous elements is a simple command; when we connect multiple simple commands we form a complex command.

While parsing the command our parser saves the command details in our table to be passed to the executor.

We picked a table as our data structure; we need to store the following information about each command: the Command, Option, Option2, Arguments, StdIn, StdOut and StdError.

For example:

ls -al | sort -r

This command will result in the following table (each row is a simple command; the table itself is a complex command).

[Table: command table for ls -al | sort -r]
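As a rough sketch (the exact values depend on the implementation described above), the table would look something like this, with the first command's output wired into the second command's input through a pipe:

Command   Option   Option2   Arguments   StdIn      StdOut     StdError
ls        -al      -         -           terminal   pipe       terminal
sort      -r       -         -           pipe       terminal   terminal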

3. Executor

After the command table has been built, the executor is responsible for creating a process for every command in the table and handling any redirection if needed.
The executor iterates over the table to execute every simple command and connect it to the next one. At every entry in the table (simple command), the executor arranges the command, options and arguments to be passed to the execvp function, which replaces the current invoking process with the called one. execvp receives as its first parameter the name of the file to be executed, and as its second a null-terminated array containing the options (if any) followed by the arguments.

But before execvp is called, the executor handles redirection in the shell. If the command is preceded by another command, there is a pipe before it, so the command's input is set to be received from the previous pipe; the command is then checked for any input redirection, which, if present, overrides the input from the previous pipe. If the command is not preceded by another command, there is no piping (a simple command); otherwise (more than one simple command), the output of the command is sent to the next command in the table. The command is then checked for output redirection; if a redirection to a file exists, the file overrides the piped output.

After the redirection has been handled, the command is checked for the background flag, which indicates whether the shell should wait for the command to finish or send the process to be executed in the background. In order for the executor to execute the command, it has to create an image of the shell: the executor forks the current process (the shell) and executes the command in the child of this fork.

The executor starts by executing the first row: it sets the output of the command to standard output, then overwrites the output to point to the pipe so that it can be received by the second command. After the first command (ls -al) executes, the second command starts executing. Its input is first assigned to standard input; then, because the command is preceded by another one, the input is set to be received from the pipe, and since the command does not contain any input redirection (from a file), its standard input remains the pipe. The command's standard output is initially the screen; the command is then checked to see whether it should send its output to a following command. In this case it is the last command, so the output is not overwritten by piping, but since the command has an output redirection to a file, the file overrides the standard output.

Executing the following command

ls -al | sort -r >file

The table that is built by the parser will look like this:

[Table: command table for ls -al | sort -r >file]
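Again as a rough, hedged sketch, the only difference from the previous table is that the output of the last command now goes to the file instead of the terminal:

Command   Option   Option2   Arguments   StdIn      StdOut     StdError
ls        -al      -         -           terminal   pipe       terminal
sort      -r       -         -           pipe       file       terminal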