Key Takeaways
- Linux is everywhere that matters — servers, cloud infrastructure, containers, embedded systems. Linux fluency is non-negotiable for any technical role in 2026.
- Learn the 20 core commands first: ls, cd, pwd, cat, grep, find, chmod, chown, ps, kill, top, ssh, curl, systemctl, df, du, tar, cp, mv, rm.
- Pipe commands together. The power of Linux is composing simple commands into complex pipelines.
  `cat access.log | grep 404 | awk '{print $7}' | sort | uniq -c | sort -rn` is a complete log analysis pipeline.
- man pages are always there. `man command` gives you the complete reference for any command. Use them.
Navigation and File Operations
```bash
# Navigation
pwd                         # print working directory
ls -la                      # list all files with details
cd /path/to/dir             # change directory
cd ~                        # go to home directory
cd -                        # go to previous directory

# File operations
cp file.txt /dest/          # copy file
cp -r dir/ /dest/           # copy directory recursively
mv file.txt newname.txt     # move or rename
rm file.txt                 # delete file
rm -rf directory/           # delete directory recursively (careful!)
mkdir -p path/to/new/dir    # create directory (and parents)
touch file.txt              # create empty file / update timestamp
ln -s /target /link         # create symbolic link

# Find files
find /path -name "*.log"    # find by name
find /path -mtime -7        # modified in last 7 days
find /path -size +100M      # larger than 100MB
find . -name "*.py" -exec grep -l "import" {} \;
```
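As a quick check of the `find -exec grep` combination above, here is a self-contained run against a throwaway directory (the `proj/` layout is made up for the example):

```shell
# Hypothetical mini-project to exercise find -exec (safe to delete afterwards)
mkdir -p proj/src
echo 'import os' > proj/src/app.py    # contains "import"
echo 'x = 1'     > proj/src/util.py   # does not
find proj -name '*.py' -exec grep -l 'import' {} \;
# prints: proj/src/app.py
rm -rf proj
```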
Text Viewing and Processing
```bash
# View files
cat file.txt                # print entire file
less file.txt               # paginated viewer (q to quit)
head -20 file.txt           # first 20 lines
tail -50 file.txt           # last 50 lines
tail -f /var/log/app.log    # follow log in real time

# Search
grep "error" app.log        # lines containing "error"
grep -i "error" app.log     # case insensitive
grep -r "TODO" src/         # recursive directory search
grep -n "pattern" file.txt  # show line numbers
grep -v "debug" app.log     # lines NOT matching

# Text processing
awk '{print $1, $3}' file   # print columns 1 and 3
awk -F',' '{print $2}' csv  # comma-delimited, column 2
sed 's/old/new/g' file.txt  # replace all occurrences
sed -i 's/old/new/g' file   # in-place edit
cut -d':' -f1 /etc/passwd   # cut field 1 by delimiter
sort file.txt               # sort lines alphabetically
sort -n numbers.txt         # sort numerically
uniq -c sorted.txt          # count unique lines
wc -l file.txt              # count lines
```
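A minimal round trip for `sed -i`, using a throwaway file (note: this is GNU sed syntax; BSD/macOS sed requires `sed -i ''`):

```shell
echo 'debug = true' > settings.conf    # throwaway example file
sed -i 's/true/false/' settings.conf   # edit the file in place, no output
cat settings.conf                      # prints: debug = false
rm settings.conf
```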
File Permissions and Ownership
```bash
# Permissions: rwxr-xr-- = owner:rwx, group:r-x, other:r--
# Numeric: r=4, w=2, x=1. 755 = rwxr-xr-x
chmod 755 script.sh         # set permissions numerically
chmod +x script.sh          # add execute for all
chmod u+x,go-w file         # user add execute, group/other remove write
chmod -R 750 directory/     # recursive chmod
chown user:group file.txt   # change owner and group
chown -R user:group dir/    # recursive chown
ls -la                      # view permissions and ownership

# Run as superuser
sudo command                # run command as root
sudo -u username command    # run as specific user
su - username               # switch to user
```
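To see the numeric and symbolic forms side by side, chmod a throwaway file and inspect it with `stat` (the `%a`/`%A` format specifiers are GNU coreutils; BSD/macOS stat uses different flags):

```shell
touch demo.sh
chmod 754 demo.sh           # owner=rwx(7), group=r-x(5), other=r--(4)
stat -c '%a %A' demo.sh     # prints: 754 -rwxr-xr--
rm demo.sh
```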
Process Management
```bash
# View processes
ps aux                      # all processes with details
ps aux | grep nginx         # find specific process
top                         # live process monitor
htop                        # improved top (install separately)

# Kill processes
kill PID                    # send SIGTERM (graceful)
kill -9 PID                 # send SIGKILL (force kill)
killall nginx               # kill all processes named nginx
pkill -f "python script.py" # kill by command match

# Background processes
command &                   # run in background
nohup command &             # run ignoring hangup (survives logout)
jobs                        # list background jobs
fg %1                       # bring job 1 to foreground
bg %1                       # resume job 1 in background
```
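A safe way to try SIGTERM without touching a real service: background a `sleep` and kill it by PID (`sleep 300` is just a stand-in for any long-running process):

```shell
sleep 300 &                       # stand-in long-running process
pid=$!                            # $! holds the PID of the last background job
kill "$pid"                       # graceful SIGTERM first; escalate to -9 only if needed
wait "$pid" 2>/dev/null || true   # reap it; exit status reflects the signal
kill -0 "$pid" 2>/dev/null || echo "process $pid is gone"
```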
Networking Commands
```bash
# Connectivity
ping -c 4 google.com            # test connectivity (4 packets)
traceroute google.com           # trace route to host
curl -v https://api.example.com # HTTP request with verbose output
curl -o file.zip URL            # download file
wget URL                        # download file

# Ports and connections
ss -tlnp                        # listening TCP ports + process
ss -tlnp | grep :8080           # check specific port
lsof -i :3000                   # process using port 3000
netstat -tlnp                   # older alternative to ss

# DNS
dig example.com                 # DNS lookup
dig @8.8.8.8 example.com        # use specific DNS server
nslookup example.com            # simple DNS lookup
host example.com                # quick DNS lookup

# Network interfaces
ip addr show                    # show IP addresses
ip route show                   # show routing table
ifconfig                        # older alternative (may need install)
```
Disk and Storage
```bash
# Disk usage
df -h                             # disk space (human readable)
du -sh /var/log/                  # directory size (summary)
du -sh * | sort -h                # sizes of all items, sorted
du -h --max-depth=1 /             # top-level directory sizes

# Archives
tar -czvf archive.tar.gz dir/     # create gzipped archive
tar -xzvf archive.tar.gz          # extract gzipped archive
tar -xzvf archive.tar.gz -C /dest # extract to specific dir
zip -r archive.zip dir/           # create zip
unzip archive.zip                 # extract zip
```
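A quick tar round trip on throwaway data (the `demo/` tree is made up) confirms that the create and extract flags mirror each other:

```shell
mkdir -p demo/sub
echo 'hello' > demo/sub/file.txt
tar -czf demo.tar.gz demo/    # create (quiet: no -v)
rm -rf demo                   # remove the original
tar -xzf demo.tar.gz          # extract restores the tree
cat demo/sub/file.txt         # prints: hello
rm -rf demo demo.tar.gz
```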
Service Management (systemctl)
```bash
systemctl status nginx          # check service status
systemctl start nginx           # start service
systemctl stop nginx            # stop service
systemctl restart nginx         # restart service
systemctl reload nginx          # reload config without restart
systemctl enable nginx          # start on boot
systemctl disable nginx         # do not start on boot
journalctl -u nginx             # view service logs
journalctl -u nginx -f          # follow service logs
journalctl -u nginx --since "1 hour ago"
```
SSH and Remote Access
```bash
ssh user@host                       # connect to remote host
ssh -i key.pem user@host            # connect with specific key
ssh -p 2222 user@host               # non-standard port
ssh -L 8080:localhost:80 user@host  # local port forwarding
scp file.txt user@host:/path/       # copy file to remote
scp user@host:/path/file.txt .      # copy file from remote
scp -r dir/ user@host:/path/        # copy directory to remote
rsync -avz local/ user@host:/remote/          # sync directories
rsync -avz --delete local/ user@host:/remote/ # sync with delete

# SSH key generation
ssh-keygen -t ed25519 -C "comment"  # generate Ed25519 key
ssh-copy-id user@host               # copy public key to server
```
Pipes, Redirection, and Power Combos
```bash
# Pipes and redirection
command1 | command2             # pipe output of cmd1 to cmd2
command > file.txt              # redirect stdout to file (overwrite)
command >> file.txt             # redirect stdout to file (append)
command 2>&1 | tee file.txt     # stdout + stderr to file and screen
command < input.txt             # redirect stdin from file

# Useful power combos
# Find all Python files containing "TODO"
find . -name "*.py" | xargs grep -l "TODO"

# Count 404 errors by URL in nginx access log
awk '$9 == 404 {print $7}' access.log | sort | uniq -c | sort -rn | head -20

# Find and kill a process by name
ps aux | grep "node server" | grep -v grep | awk '{print $2}' | xargs kill

# Show top 10 largest files in current directory tree
find . -type f -exec du -sh {} \; | sort -rh | head -10

# Watch a command output refresh every 2 seconds
watch -n 2 'ss -tlnp'
```
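The `sort | uniq -c | sort -rn` pattern can be exercised on inline data, no log file needed (the request lines below are made up):

```shell
# Count made-up requests per path, most frequent first
printf 'GET /home\nGET /api\nGET /home\nGET /home\n' \
  | awk '{print $2}' | sort | uniq -c | sort -rn
# prints the counts in descending order: 3 /home, then 1 /api
```

Note that `sort` must run before `uniq -c`, because uniq only collapses adjacent duplicate lines.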
Linux fluency opens every door in tech. Build it properly.
The Precision AI Academy bootcamp covers Linux, systems administration, and AI development in two hands-on days. $1,490. October 2026. 40 seats per city.
Frequently Asked Questions
What are the most important Linux commands to learn first?
Start with: ls, cd, pwd, mkdir, rm, cp, mv, cat, less, grep for files and navigation. Then ps, top, kill, systemctl for processes. Then chmod, chown for permissions. Then curl, ssh, ss, ping for networking. These 20 commands cover 80% of daily Linux work.
How do Linux file permissions work?
Three groups: owner, group, other. Each has read (r=4), write (w=2), execute (x=1). chmod 755 sets owner=rwx (7), group=r-x (5), other=r-x (5). The string -rwxr-xr-x shows this as text. chmod +x adds execute. chmod -R applies recursively.
What is the difference between grep, awk, and sed?
grep searches for lines matching a pattern and prints them. awk processes structured text field-by-field with pattern-action rules (computation, transformation). sed applies line-level text transformations (substitution, deletion). Simple search: grep. Field processing and computation: awk. In-place text replacement: sed.
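The same toy task done three ways makes the division of labor concrete (the two-line passwd-style input is made up):

```shell
data='root:x:0:0
alice:x:1000:1000'
echo "$data" | grep '^alice'           # grep: find matching lines
echo "$data" | awk -F':' '{print $1}'  # awk: extract a field from every line
echo "$data" | sed 's/:.*//'           # sed: transform each line (strip after first colon)
```

Here the awk and sed commands happen to print the same result; they diverge as soon as you need arithmetic or multiple fields (awk) versus pure pattern substitution (sed).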
How do you check what is using a port on Linux?
Use ss -tlnp to list all listening TCP ports with the process name and PID. For a specific port: ss -tlnp | grep :8080 or lsof -i :8080. These tell you exactly which process is occupying a port.
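To try this without a real service, start a throwaway listener and look it up (port 8123 and python3's http.server are arbitrary choices for the demo; ss comes from the iproute2 package):

```shell
python3 -m http.server 8123 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1
ss -tln | grep :8123    # the listener appears (add -p for PID; may need sudo for other users' processes)
kill "$srv"
```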
Note: Some commands vary between Linux distributions (Ubuntu/Debian vs RHEL/CentOS/Fedora) and may require separate installation. Where older and newer tools overlap (netstat vs ss, ifconfig vs ip), the examples above use the current standard tools.