### [sort](https://blog.gtwang.org/linux/linux-sort-command-tutorial-and-examples/)

```bash
#!/bin/bash
rm -f /tmp/some.txt
echo "TrueOS,$(( $RANDOM % 10 + 1 )),$(( $RANDOM % 100 ))" >> /tmp/some.txt
echo "Mint,$(( $RANDOM % 10 + 1 )),$(( $RANDOM % 100 ))" >> /tmp/some.txt
echo "Debian,$(( $RANDOM % 10 + 1 )),$(( $RANDOM % 100 ))" >> /tmp/some.txt
echo "Solus,$(( $RANDOM % 10 + 1 )),$(( $RANDOM % 100 ))" >> /tmp/some.txt
echo "Ubuntu,$(( $RANDOM % 10 + 1 )),$(( $RANDOM % 100 ))" >> /tmp/some.txt
echo "Antergos,$(( $RANDOM % 10 + 1 )),$(( $RANDOM % 100 ))" >> /tmp/some.txt
echo "elementary,$(( $RANDOM % 10 + 1 )),$(( $RANDOM % 100 ))" >> /tmp/some.txt
echo "Manjaro,$(( $RANDOM % 10 + 1 )),$(( $RANDOM % 100 ))" >> /tmp/some.txt
echo "openSUSE,$(( $RANDOM % 10 + 1 )),$(( $RANDOM % 100 ))" >> /tmp/some.txt
echo "Fedora,$(( $RANDOM % 10 + 1 )),$(( $RANDOM % 100 ))" >> /tmp/some.txt
sort /tmp/some.txt
echo "--------------------------------------------------"
cat /tmp/some.txt | sort
echo "Reverse the sort order"
sort -r /tmp/some.txt
echo "Sort by a specific field of each line"
echo "-t, : the field separator character follows -t directly"
echo "-kA,B : A is the start field, B is the end field"
sort -t, -k2,2 -n /tmp/some.txt
echo "--------------------------------------------------"
sort -t, -k2 /tmp/some.txt
```

### [uniq — remove duplicate lines](https://blog.gtwang.org/linux/linux-uniq-command-tutorial/)

Contents of serial.txt:

```
10 => wAwbwCM8x7
11 => wAwbwCM8x7
12 => iwFSRivUPlVI
13 => wAwbwCM8x7
14 => iwFSRivUPlVI
14 => iwFSRivUPlVI
```

```bash
uniq -s 2 serial.txt  # skip the first two characters when comparing
10 => wAwbwCM8x7
12 => iwFSRivUPlVI
13 => wAwbwCM8x7
14 => iwFSRivUPlVI
```

Contents of serial.txt:

```
wAwbwCM8x7
wAwbwCM8x7
iwFSRivUPlVI
wAwbwCM8x7
iwFSRivUPlVI
iwFSRivUPlVI
```

```bash
uniq serial.txt  # remove adjacent duplicate lines
wAwbwCM8x7
iwFSRivUPlVI
wAwbwCM8x7
iwFSRivUPlVI

sort serial.txt | uniq  # sort first, then remove duplicate lines
iwFSRivUPlVI
wAwbwCM8x7
```

### [How to sort lines of text files in Linux?](https://www.tutorialspoint.com/how-to-sort-lines-of-text-files-in-linux)

```bash
sort text.txt > newtext.txt
```

---

# Find duplicate lines

## ✅ Common approaches

### 1. Find duplicate lines directly

```bash
sort file.txt | uniq -d
```

* `sort`: sort first (uniq only compares adjacent lines)
* `uniq -d`: print only the duplicated lines (each shown once, not repeated)

---

### 2. Count how many times each line appears

```bash
sort file.txt | uniq -c
```

Sample output:

```
1 apple
3 banana
2 cherry
```

👉 The leading number is the occurrence count.

---

### 3. List only lines appearing more than once (with counts)

```bash
sort file.txt | uniq -c | awk '$1 > 1'
```

Result:

```
3 banana
2 cherry
```

---

### 4. List only unique lines (those with no duplicates)

```bash
sort file.txt | uniq -u
```

---

## 🔑 Summary

* `sort file | uniq -d` → find duplicated lines
* `sort file | uniq -c` → count occurrences
* `sort file | uniq -u` → find unique lines

---
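The three summary commands can be tried end-to-end on one small file; a minimal sketch (the sample data and the `/tmp/fruits.txt` path are illustrative, not from the original post):

```shell
#!/bin/bash
# Build a small sample file containing duplicate lines (illustrative data).
printf '%s\n' banana apple cherry banana banana cherry > /tmp/fruits.txt

# Duplicated lines, each printed once.
sort /tmp/fruits.txt | uniq -d        # banana, cherry

# Occurrence counts; piping to `sort -nr` ranks the most frequent first.
sort /tmp/fruits.txt | uniq -c | sort -nr

# Lines that appear exactly once.
sort /tmp/fruits.txt | uniq -u        # apple
```

Note that every variant sorts first: `uniq` only collapses *adjacent* identical lines, so unsorted input would leave the repeated `banana` runs split apart.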