
Statistics on the Command Line for Beginner Data Scientists

Image by Editor

 

Introduction

 
If you’re just beginning your data science journey, you may assume you need tools like Python, R, or other software to run statistical analysis on data. However, the command line is already a powerful statistical toolkit.

Command-line tools can often process large datasets faster than loading them into memory-heavy applications. They are easy to script and automate. Moreover, these tools work on any Unix system without installing anything.

In this article, you’ll learn how to perform essential statistical operations directly from your terminal using only built-in Unix tools.

🔗 Here is the Bash script on GitHub. Coding along is highly recommended to fully grasp the concepts.

To follow this tutorial, you will need:

  • A Unix-like environment (Linux, macOS, or Windows with WSL).
  • Only standard Unix tools, all of which are already installed.

Open your terminal to start.

 

Setting Up Sample Data

 
Before we can analyze data, we need a dataset. Create a simple CSV file representing daily website traffic by running the following command in your terminal:

cat > traffic.csv << EOF
date,visitors,page_views,bounce_rate
2024-01-01,1250,4500,45.2
2024-01-02,1180,4200,47.1
2024-01-03,1520,5800,42.3
2024-01-04,1430,5200,43.8
2024-01-05,980,3400,51.2
2024-01-06,1100,3900,48.5
2024-01-07,1680,6100,40.1
2024-01-08,1550,5600,41.9
2024-01-09,1420,5100,44.2
2024-01-10,1290,4700,46.3
EOF

 

This creates a new file called traffic.csv with a header and ten rows of sample data.

 

Exploring Your Data

 

// Counting Rows in Your Dataset

One of the first things to establish about a dataset is the number of records it contains. The wc (word count) command with the -l flag counts the number of lines in a file:
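
wc -l traffic.csv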

 

The output displays: 11 traffic.csv (11 lines total, minus 1 header line = 10 data rows).

 

// Viewing Your Data

Before moving on to calculations, it’s helpful to verify the data structure. The head command displays the first few lines of a file:
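
head -n 5 traffic.csv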

 

This shows the first 5 lines, allowing you to preview the data.

date,visitors,page_views,bounce_rate
2024-01-01,1250,4500,45.2
2024-01-02,1180,4200,47.1
2024-01-03,1520,5800,42.3
2024-01-04,1430,5200,43.8

 

// Extracting a Single Column

To work with specific columns in a CSV file, use the cut command with a delimiter and a field number. The following command extracts the visitors column:

cut -d',' -f2 traffic.csv | tail -n +2

 

This extracts field 2 (the visitors column) using cut, and tail -n +2 skips the header row.

 

Calculating Measures of Central Tendency

 

// Finding the Mean (Average)

The mean is the sum of all values divided by the number of values. We can calculate it by extracting the target column, then using awk to accumulate values:

cut -d',' -f2 traffic.csv | tail -n +2 | awk '{sum+=$1; count++} END {print "Mean:", sum/count}'

 

The awk command accumulates the sum and the count as it processes each line, then divides them in the END block.

 

Next, we calculate the median and the mode.

 

// Finding the Median

The median is the middle value when the dataset is sorted. For an even number of values, it’s the average of the two middle values. First, sort the data, then find the middle:

cut -d',' -f2 traffic.csv | tail -n +2 | sort -n | awk '{arr[NR]=$1; count=NR} END {if(count%2==1) print "Median:", arr[(count+1)/2]; else print "Median:", (arr[count/2]+arr[count/2+1])/2}'

 

This sorts the data numerically with sort -n, stores the values in an array, then finds the middle value (or the average of the two middle values if the count is even).

 

// Finding the Mode

The mode is the most frequently occurring value. We find it by sorting, counting duplicates, and identifying which value appears most often:

cut -d',' -f2 traffic.csv | tail -n +2 | sort -n | uniq -c | sort -rn | head -n 1 | awk '{print "Mode:", $2, "(appears", $1, "times)"}'

 

This sorts the values, counts duplicates with uniq -c, sorts by frequency in descending order, and selects the top result.

 

Calculating Measures of Dispersion (or Spread)

 

// Finding the Maximum Value

To find the largest value in your dataset, compare each value and track the maximum:

awk -F',' 'NR>1 {if($2>max) max=$2} END {print "Maximum:", max}' traffic.csv

 

This skips the header with NR>1, compares each value to the current max, and updates it whenever a larger value is found.

 

// Finding the Minimum Value

Similarly, to find the smallest value, initialize the minimum from the first data row and update it when smaller values are found:

awk -F',' 'NR==2 {min=$2} NR>2 {if($2<min) min=$2} END {print "Minimum:", min}' traffic.csv

 

Run the above commands to retrieve the maximum and minimum values.

 

// Finding Both Min and Max

Rather than running two separate commands, we can find both the minimum and maximum in a single pass:

awk -F',' 'NR==2 {min=$2; max=$2} NR>2 {if($2<min) min=$2; if($2>max) max=$2} END {print "Min:", min, "Max:", max}' traffic.csv

 

This single-pass approach initializes both variables from the first data row, then updates each independently.

 

// Calculating (Population) Standard Deviation

Standard deviation measures how spread out the values are from the mean. For a complete population, use this formula:

awk -F',' 'NR>1 {sum+=$2; sumsq+=$2*$2; count++} END {mean=sum/count; print "Std Dev:", sqrt((sumsq/count)-(mean*mean))}' traffic.csv

 

This accumulates the sum and the sum of squares, then applies the formula \( \sqrt{\frac{\sum x^2}{N} - \mu^2} \). For this dataset, the population standard deviation works out to roughly 207.36.

 

// Calculating Sample Standard Deviation

When working with a sample rather than a complete population, use Bessel’s correction (dividing by \( n-1 \)) for an unbiased estimate:

awk -F',' 'NR>1 {sum+=$2; sumsq+=$2*$2; count++} END {mean=sum/count; print "Sample Std Dev:", sqrt((sumsq-(sum*sum/count))/(count-1))}' traffic.csv

 

This yields a slightly larger value than the population standard deviation, about 218.58 for this data.

 

// Calculating Variance

Variance is the square of the standard deviation. It’s another measure of spread, useful in many statistical calculations:

awk -F',' 'NR>1 {sum+=$2; sumsq+=$2*$2; count++} END {mean=sum/count; var=(sumsq/count)-(mean*mean); print "Variance:", var}' traffic.csv

 

This calculation mirrors the population standard deviation but omits the square root.

 

Calculating Percentiles

 

// Calculating Quartiles

Quartiles divide sorted data into four equal parts. They are especially useful for understanding the distribution of the data:

cut -d',' -f2 traffic.csv | tail -n +2 | sort -n | awk '
{arr[NR]=$1; count=NR}
END {
  q1_pos = (count+1)/4
  q2_pos = (count+1)/2
  q3_pos = 3*(count+1)/4
  print "Q1 (25th percentile):", arr[int(q1_pos)]
  print "Q2 (Median):", (count%2==1) ? arr[int(q2_pos)] : (arr[count/2]+arr[count/2+1])/2
  print "Q3 (75th percentile):", arr[int(q3_pos)]
}'

 

This script stores the sorted values in an array, calculates quartile positions using the \( (n+1)/4 \) formula, and extracts the values at those positions. The code outputs:

Q1 (25th percentile): 1100
Q2 (Median): 1355
Q3 (75th percentile): 1520

 

// Calculating Any Percentile

You can calculate any percentile by adjusting the position calculation. The following flexible approach uses linear interpolation:

PERCENTILE=90
cut -d',' -f2 traffic.csv | tail -n +2 | sort -n | awk -v p=$PERCENTILE '
{arr[NR]=$1; count=NR}
END {
  pos = (count+1) * p/100
  idx = int(pos)
  frac = pos - idx
  if(idx >= count) print p "th percentile:", arr[count]
  else print p "th percentile:", arr[idx] + frac * (arr[idx+1] - arr[idx])
}'

 

This calculates the position as \( (n+1) \times (\text{percentile}/100) \), then uses linear interpolation between array indices for fractional positions. As a quick sanity check, setting PERCENTILE=50 reproduces the median (1355 for this data).

 

Working with Multiple Columns

 
Often, you’ll want to calculate statistics across multiple columns at once. Here is how to compute averages for visitors, page views, and bounce rate simultaneously:

awk -F',' '
NR>1 {
  v_sum += $2
  pv_sum += $3
  br_sum += $4
  count++
}
END {
  print "Average visitors:", v_sum/count
  print "Average page views:", pv_sum/count
  print "Average bounce rate:", br_sum/count
}' traffic.csv

 

This maintains a separate accumulator for each column and shares the same count across all three, giving the following output:

Average visitors: 1340
Average page views: 4850
Average bounce rate: 45.06

 

// Calculating Correlation

Correlation measures the relationship between two variables. The Pearson correlation coefficient ranges from -1 (perfect negative correlation) to 1 (perfect positive correlation):

awk -F', *' '
NR>1 {
  x[NR-1] = $2
  y[NR-1] = $3

  sum_x += $2
  sum_y += $3

  count++
}
END {
  if (count < 2) exit

  mean_x = sum_x / count
  mean_y = sum_y / count

  for (i = 1; i <= count; i++) {
    dx = x[i] - mean_x
    dy = y[i] - mean_y

    cov   += dx * dy
    var_x += dx * dx
    var_y += dy * dy
  }

  sd_x = sqrt(var_x / count)
  sd_y = sqrt(var_y / count)

  correlation = (cov / count) / (sd_x * sd_y)

  print "Correlation:", correlation
}' traffic.csv

 

This calculates the Pearson correlation by dividing the covariance by the product of the standard deviations.
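
The same pattern works for any pair of numeric columns. As a small variation (my own sketch, not part of the original walkthrough), here is the script pointed at visitors (field 2) and bounce rate (field 4); for this sample data the result is strongly negative (roughly -0.99), meaning busier days tend to have lower bounce rates:

awk -F', *' '
NR>1 {
  x[NR-1] = $2   # visitors
  y[NR-1] = $4   # bounce_rate
  sum_x += $2
  sum_y += $4
  count++
}
END {
  if (count < 2) exit
  mean_x = sum_x / count
  mean_y = sum_y / count
  for (i = 1; i <= count; i++) {
    dx = x[i] - mean_x
    dy = y[i] - mean_y
    cov += dx * dy
    var_x += dx * dx
    var_y += dy * dy
  }
  print "Correlation (visitors vs bounce rate):", (cov / count) / (sqrt(var_x / count) * sqrt(var_y / count))
}' traffic.csv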

 

Conclusion

 
The command line is a powerful tool for statistical analysis. You can process large volumes of data, calculate complex statistics, and automate reports, all without installing anything beyond what’s already on your system.

These skills complement your Python and R knowledge rather than replacing them. Use command-line tools for quick exploration and data validation, then move to specialized tools for complex modeling and visualization when needed. If you find yourself repeating the same checks, you can also wrap them in a small helper, as sketched below.
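
Here is a minimal sketch of such a helper (the col_stats name and interface are my own illustration, not from the article). It prints the count, mean, min, and max of any numeric column in a CSV file with a header row:

col_stats() {
  # Usage: col_stats <file> <column_number>
  local file=$1 col=$2
  cut -d',' -f"$col" "$file" | tail -n +2 | sort -n | awk '
    {arr[NR] = $1; sum += $1}
    END {
      print "Count:", NR
      print "Mean:", sum/NR
      print "Min:", arr[1]
      print "Max:", arr[NR]
    }'
}

# Example: summarize the visitors column (field 2)
col_stats traffic.csv 2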

The best part is that these tools are available on virtually every system you’ll use in your data science career. Open your terminal and start exploring your data.
 
 

Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she is working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.

