This is a passage of English material I excerpted; before studying the algorithms, it helps to have a rough overview of the various sorts:
Sorting algorithms are an important part of managing data. At Cprogramming.com, we offer tutorials for understanding the
most important and common
sorting techniques. Each algorithm has particular strengths and weaknesses and in many cases the best thing to do is just use the built-in sorting function qsort. For times when this isn't an
option or you just need a quick and dirty sorting algorithm, there are a variety of choices.
Most sorting algorithms work by comparing the data being sorted. In some cases, it may be desirable to sort a large chunk of data (for instance, a struct containing a name and address) based on only
a portion of that data. The piece of data actually used to determine the sorted order is called the key.
Sorting algorithms are usually judged by their efficiency. In this case, efficiency refers to the algorithmic efficiency as the size of the input grows large and is generally based on the number of
elements to sort. Most of the algorithms in use have an algorithmic efficiency of either O(n^2) or O(n*log(n)). A few special case algorithms (one example is mentioned in Programming
Pearls) can sort certain data sets faster than O(n*log(n)). These algorithms are not based on comparing the items being sorted and rely on tricks. It has been shown that no key-comparison algorithm
can perform better than O(n*log(n)).
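One classic non-comparison technique of the kind alluded to above is counting sort, which runs in O(n + k) when the keys are integers in a known range [0, k). A minimal sketch (the range bound k = 100 is an assumption for the demo, not something the passage specifies):

```java
// Counting sort: O(n + k) for integer keys in [0, k) -- no key comparisons at all.
public class CountingSortDemo {
    static int[] countingSort(int[] input, int k) {
        int[] count = new int[k];
        for (int value : input) {
            count[value]++;               // tally how often each key occurs
        }
        int[] output = new int[input.length];
        int pos = 0;
        for (int value = 0; value < k; value++) {
            for (int c = 0; c < count[value]; c++) {
                output[pos++] = value;    // emit each key count[value] times
            }
        }
        return output;
    }

    public static void main(String[] args) {
        int[] data = {66, 55, 88, 11, 44, 22, 99, 33, 77};
        System.out.println(java.util.Arrays.toString(countingSort(data, 100)));
    }
}
```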
Many algorithms that have the same efficiency do not have the same speed on the same input. First, algorithms must be judged based on their average case, best case, and worst case efficiency. Some
algorithms, such as quick sort, perform exceptionally well for some inputs, but horribly for others. Other algorithms, such as merge sort, are unaffected by the order of input data. Even a modified version of bubble sort can finish in O(n) for the most favorable
inputs.
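The O(n) best case of the modified bubble sort mentioned above comes from an early-exit flag: if a full pass makes no swaps, the array is already sorted and the loop stops. A minimal sketch:

```java
// Modified bubble sort: stops as soon as a pass performs no swaps,
// so an already-sorted array costs only a single O(n) pass.
public class ModifiedBubbleSort {
    static void bubbleSort(int[] array) {
        boolean swapped = true;
        for (int end = array.length - 1; end > 0 && swapped; end--) {
            swapped = false;
            for (int i = 0; i < end; i++) {
                if (array[i] > array[i + 1]) {
                    int temp = array[i];
                    array[i] = array[i + 1];
                    array[i + 1] = temp;
                    swapped = true;   // a swap happened; another pass is needed
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {66, 55, 88, 11, 44, 22, 99, 33, 77};
        bubbleSort(data);
        System.out.println(java.util.Arrays.toString(data));
    }
}
```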
A second factor is the "constant term". As big-O notation abstracts away many of the details of a process, it is quite useful for looking at the big picture. But one thing that gets dropped out is
the constant in front of the expression: for instance, O(c*n) is just O(n). In the real world, the constant, c, will vary across different algorithms. A well-implemented quicksort should have a much smaller constant multiplier than heap sort.
A second criterion for judging algorithms is their space requirement -- do they require scratch space or can the array be sorted in place (without additional memory beyond a few variables)? Some algorithms
never require extra space, whereas some are most easily understood when implemented with extra space (heap sort, for instance, can be done in place, but conceptually it is much easier to think of a separate heap). Space requirements may even depend on the
data structure used (merge sort on arrays versus merge sort on linked lists, for instance).
A third criterion is stability -- does the sort preserve the order of keys with equal values? Most simple sorts do just this, but some sorts, such as heap sort, do not.
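Stability is easiest to see with records that compare equal on the key: a stable sort keeps them in their input order. A small demo using Java's Arrays.sort for object arrays, which is documented to be stable (the record values are made up for illustration):

```java
import java.util.Arrays;
import java.util.Comparator;

// Stability demo: two records share key 2; a stable sort keeps "first" before "second".
public class StabilityDemo {
    public static void main(String[] args) {
        String[][] records = {
            {"2", "first"}, {"1", "only"}, {"2", "second"}
        };
        // Arrays.sort on objects is guaranteed stable (a merge sort variant).
        Arrays.sort(records, Comparator.comparingInt((String[] r) -> Integer.parseInt(r[0])));
        for (String[] r : records) {
            System.out.println(r[0] + " " + r[1]);
        }
    }
}
```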
The following chart compares sorting algorithms on the various criteria outlined above; the algorithms with higher constant terms appear first, though this is clearly an implementation-dependent concept
and should only be taken as a rough guide when picking between sorts of the same big-O efficiency.
Sort | Average time | Best time | Worst time | Space | Stability | Remarks
---|---|---|---|---|---|---
Bubble sort | O(n^2) | O(n^2) | O(n^2) | Constant | Stable | Always use a modified bubble sort
Modified bubble sort | O(n^2) | O(n) | O(n^2) | Constant | Stable | Stops after reaching a sorted array
Selection sort | O(n^2) | O(n^2) | O(n^2) | Constant | Unstable | Even a perfectly sorted input requires scanning the entire array; the usual swap-based version is not stable
Insertion sort | O(n^2) | O(n) | O(n^2) | Constant | Stable | In the best case (already sorted), every insert requires constant time
Heap sort | O(n*log(n)) | O(n*log(n)) | O(n*log(n)) | Constant | Unstable | By using the input array as storage for the heap, it is possible to achieve constant space
Merge sort | O(n*log(n)) | O(n*log(n)) | O(n*log(n)) | Depends | Stable | On arrays, merge sort requires O(n) space; on linked lists, constant space
Quicksort | O(n*log(n)) | O(n*log(n)) | O(n^2) | Constant | Unstable | Randomly picking a pivot value (or shuffling the array prior to sorting) can help avoid worst-case scenarios such as a perfectly sorted array
Visual, intuitive demonstrations of the various algorithms: http://blog.jobbole.com/11745/
QuickSort (快速):
@ The idea of quicksort is to split the array around a pivot (here the first element): elements smaller than the pivot end up on its left (region A) and larger ones on its right (region H). At that point the pivot's final position is already fixed (think about why); see the walkthrough (思路一) below.
From here, recursion carries the sort forward:
@ The pivot's position becomes fixed exactly when the two cursors i and j meet (i == j), so before recursing we step them apart again with i-- and j++ (see the code below if this is unclear).
@ We first sort the part that came out larger than the pivot (region H), treating it as a new array whose head index is the incremented j; only once region H is sorted does region A get sorted. [This is the recursive idea at work, think it through.]
Walkthrough (思路一):
The initial order of the array:
(i = 0, j = 8): 66 | 55 | 88 | 11 | 44 | 22 | 99 | 33 | 77
Take the head element 66 (index i = 0) as the pivot and compare it with the tail element 77 (index j = 8):
@ If the pivot is larger than the element at j, swap the two, and from then on i advances (the next comparison would be against 55 at index i = 1);
@ otherwise leave them in place and decrement j, so the next comparison is against 33 at index j = 7 (which is what happens here).
[Remember: each swap flips which cursor moves; as long as no swap occurs, the current cursor keeps going.]
(i = 0, j = 7): 33 | 55 | 88 | 11 | 44 | 22 | 99 | 66 | 77
(i = 1, j = 7): 33 | 55 | 88 | 11 | 44 | 22 | 99 | 66 | 77
(i = 2, j = 7): 33 | 55 | 66 | 11 | 44 | 22 | 99 | 88 | 77
(i = 2, j = 6): 33 | 55 | 66 | 11 | 44 | 22 | 99 | 88 | 77
(i = 3, j = 5): 33 | 55 | 22 | 11 | 44 | 66 | 99 | 88 | 77
(i = 4, j = 5): 33 | 55 | 22 | 11 | 44 | 66 | 99 | 88 | 77
(i = 5, j = 5): 33 | 55 | 22 | 11 | 44 | 66 | 99 | 88 | 77  (the cursors meet; pivot 66 rests at index 5)
@ With the head element as the pivot, the array after the first full partition is shown in row 1 below:
@ elements greater than 66 sit to its right and elements smaller than 66 to its left [confirming the earlier point: 66's position is fixed and never changes again].
66 55 88 11 44 22 99 33 77   initial order
33 55 22 11 44 66 99 88 77   1
33 55 22 11 44 66 77 88 99   2
33 55 22 11 44 66 77 88 99   3
11 22 33 55 44 66 77 88 99   4
11 22 33 44 55 66 77 88 99   5
11 22 33 44 55 66 77 88 99   6
// Quicksort
private static void quickSort(int[] array, int start, int end) {
    if (start >= end) {
        return;
    }
    int i = start;
    int j = end;
    boolean isrun = true;
    // Partition logic from the walkthrough (思路一)
    while (i != j) {
        if (array[i] > array[j]) {
            swap(array, i, j);
            isrun = !isrun; // each swap flips which cursor moves
        }
        if (isrun) {
            j--;
        } else {
            i++;
        }
        time++;
    }
    print(array);
    i--;
    j++;
    quickSort(array, j, end);
    quickSort(array, start, i);
}
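The listing above depends on swap, print, and a time counter defined elsewhere in the original class. A self-contained sketch of the same partitioning scheme (the demo class name and inlined swap are mine):

```java
// Self-contained version of the swapping quicksort above: each swap flips
// which cursor moves, so the pivot drifts toward its final position.
public class QuickSortDemo {
    static void quickSort(int[] array, int start, int end) {
        if (start >= end) {
            return;
        }
        int i = start, j = end;
        boolean movingJ = true;           // true: j scans leftward; false: i scans rightward
        while (i != j) {
            if (array[i] > array[j]) {
                int temp = array[i];      // inlined swap
                array[i] = array[j];
                array[j] = temp;
                movingJ = !movingJ;       // the swap flips the scan direction
            }
            if (movingJ) { j--; } else { i++; }
        }
        // The pivot now rests at index i == j.
        quickSort(array, i + 1, end);     // region H first, as in the notes
        quickSort(array, start, i - 1);   // then region A
    }

    public static void main(String[] args) {
        int[] data = {66, 55, 88, 11, 44, 22, 99, 33, 77};
        quickSort(data, 0, data.length - 1);
        System.out.println(java.util.Arrays.toString(data));
    }
}
```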
Binary Insertion Sort (二分):
Preface: the basic idea of binary insertion sort (a binary search for each insertion point), sorting ascending:
@ Get the array length len and run a cursor i from 0 upward; the element at index i is the one waiting to be inserted, and after each round the elements before index i are in sorted order.
[Within each round:]
@ Record the bounds of the sorted prefix, left = 0 and right = i - 1 (minus 1 because i is the index of the element being inserted), and repeatedly probe the middle index mid;
@ if the element to insert, array[i], is less than array[mid],
@ the insertion point must lie in the left half, so the upper bound of that region becomes right = mid - 1 (and symmetrically, left = mid + 1 for the > case);
@ the repeated halving stops when left > right, which pins down the insertion position. [Remember: everything before index i is already sorted, so the block from the insertion point up to i - 1 just shifts one slot to the right.]
66 | 55 | 88 | 11 | 44 | 22   (initial)
66 | 55 | 88 | 11 | 44 | 22   (after i = 0)
55 | 66 | 88 | 11 | 44 | 22   (after i = 1)
55 | 66 | 88 | 11 | 44 | 22   (after i = 2)
11 | 55 | 66 | 88 | 44 | 22   (after i = 3)
11 | 44 | 55 | 66 | 88 | 22   (after i = 4)
11 | 22 | 44 | 55 | 66 | 88   (after i = 5)
// Binary insertion sort
private static void binaryCheck(int[] array) {
    for (int i = 0; i < array.length; i++) {
        int key = array[i];
        int left = 0;
        int right = i - 1;
        int mid = 0;
        // Binary search for the insertion point in each round
        while (left <= right) {
            mid = (right + left) / 2;
            if (key < array[mid]) {
                right = mid - 1;
            } else {
                left = mid + 1;
            }
            time++;
        }
        // Shift the block [left, i - 1] one slot to the right
        for (int j = i - 1; j >= left; j--) {
            array[j + 1] = array[j];
        }
        // Drop the key into its position
        if (left != i) {
            array[left] = key;
        }
        print(array);
    }
}
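As above, time and print live elsewhere in the original class; a self-contained adaptation without that instrumentation (class and method names are mine):

```java
// Self-contained binary insertion sort: binary-search the insert slot,
// shift [left, i - 1] right by one, then drop the key into place.
public class BinaryInsertionSortDemo {
    static void binaryInsertionSort(int[] array) {
        for (int i = 1; i < array.length; i++) {
            int key = array[i];
            int left = 0, right = i - 1;
            while (left <= right) {
                int mid = (left + right) / 2;
                if (key < array[mid]) {
                    right = mid - 1;      // insertion point is in the left half
                } else {
                    left = mid + 1;       // equal keys stay after earlier ones (stable)
                }
            }
            for (int j = i - 1; j >= left; j--) {
                array[j + 1] = array[j];  // shift the tail of the sorted prefix
            }
            array[left] = key;
        }
    }

    public static void main(String[] args) {
        int[] data = {66, 55, 88, 11, 44, 22};
        binaryInsertionSort(data);
        System.out.println(java.util.Arrays.toString(data));
    }
}
```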
ShellSort (希尔):
Look at the state of the array after each step of each round:
66 11 99 55 22 77   -------- original sequence
55 11 99 66 22 77   round 1 (d = 3)
55 11 99 66 22 77
55 11 77 66 22 99
------------------------------------------------------
11 55 77 66 22 99   round 2 (d = 1)
11 55 77 66 22 99
11 55 66 77 22 99   ---------- EXP
11 22 55 66 77 99
11 22 55 66 77 99
Can you spot the pattern? (Look again.)
Building the code:
@ In round 1 the elements compared are at indices i and i + d, with d = array.length / 2;
@ in round 2 they are again at indices i and i + d, but with d = (array.length / 2) / 2;
@ starting from d = array.length / 2, the gap keeps halving, and when d reaches 0 the sort is finished.
--------------------------------------------------------------------
@ Look at the step marked EXP: after 77 (index i = 3) and 22 (index i = 4) are compared and swapped, 22 cannot simply stay at index 3; it must keep moving left until it reaches index 1.
So we need extra logic here that walks back through the earlier elements at the same gap (the highlighted while loop below; the original post marked it in red):
while(j >= 0 && array[j + d] < array[j]){
temp = array[j];
array[j] = array[j + d];
array[j + d] = temp;
j = j - d;
}
// Shell sort
private static void shellSort(int[] array, int len) {
    int j;
    int d = len / 2;
    int temp = 0;
    while (d > 0) {
        for (int i = d; i < len; i++) {
            j = i - d;
            while (j >= 0 && array[j + d] < array[j]) {
                temp = array[j];
                array[j] = array[j + d];
                array[j + d] = temp;
                j = j - d;
                time++;
            }
            print(array);
        }
        d = d / 2;
    }
}
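Once more, a self-contained sketch of the same gap sequence without the time/print instrumentation (names are mine):

```java
// Self-contained Shell sort matching the listing above: start with gap d = n/2,
// halve it each round, and stop once d reaches 0.
public class ShellSortDemo {
    static void shellSort(int[] array) {
        for (int d = array.length / 2; d > 0; d /= 2) {
            for (int i = d; i < array.length; i++) {
                // Bubble array[i] back through its gap-d "column" while out of order.
                for (int j = i - d; j >= 0 && array[j + d] < array[j]; j -= d) {
                    int temp = array[j];
                    array[j] = array[j + d];
                    array[j + d] = temp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {66, 11, 99, 55, 22, 77};
        shellSort(data);
        System.out.println(java.util.Arrays.toString(data));
    }
}
```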
MergeSort (归并):
The merge passes for the array {33,55,99,11,66,22,77}:
{33,55}, {11,99}, {22,66}, {77}   -------- 3;
{11,33,55,99}, {22,66,77}         -------- 4 (once 22 has been compared with 11, 66 is compared starting from 33 up through 99);
{11,22,33,55,66,77,99}            -------- 5;
12 = 3 + 4 + 5 comparisons in total.
// Merge sort
private static void myMerge(int[] array, int first, int sStart, int sEnd) {
    int[] temp = new int[sEnd - first + 1];
    int i = first, j = sStart, k = 0;
    while (i < sStart && j <= sEnd) {
        // After each merge round the runs are sorted, so once one run is
        // exhausted the rest of the other can be copied over without comparisons
        if (array[i] <= array[j]) {
            temp[k] = array[i];
            i++;
            k++;
        } else {
            temp[k] = array[j];
            k++;
            j++;
        }
    }
    while (i < sStart) {
        temp[k] = array[i];
        k++;
        i++;
    }
    while (j <= sEnd) {
        temp[k] = array[j];
        k++;
        j++;
    }
    System.arraycopy(temp, 0, array, first, temp.length);
}

private static void mySort(int[] array, int start, int len) {
    int size = 0; // start index of each pair of runs being merged
    int length = array.length;
    int mtime = length / (2 * len); // number of run pairs to merge this round
    // rest = length mod 2*len, the trailing partial run (non-zero when the
    // element count does not divide evenly into runs of 2*len); the bit mask
    // works because 2*len stays a power of two when the initial call uses len = 1
    int rest = length & (2 * len - 1);
    if (mtime == 0) {
        return;
    }
    for (int i = 0; i < mtime; i++) {
        size = 2 * i * len;
        myMerge(array, size, size + len, size + 2 * len - 1);
    }
    if (rest != 0) {
        // Fold the trailing partial run into the last full pair
        myMerge(array, length - rest - 2 * len, length - rest, length - 1);
    }
    // Next round: double the run length
    mySort(array, 0, 2 * len);
}
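One detail the listing leaves implicit is the initial call: the recursion must start with run length len = 1 so that single elements are merged first. A self-contained adaptation showing that call (method names shortened, and % used in place of the bit mask, which in the original only works because 2*len stays a power of two):

```java
// Self-contained bottom-up merge sort following the listing above.
public class MergeSortDemo {
    // Merge the sorted runs array[first..sStart-1] and array[sStart..sEnd].
    static void merge(int[] array, int first, int sStart, int sEnd) {
        int[] temp = new int[sEnd - first + 1];
        int i = first, j = sStart, k = 0;
        while (i < sStart && j <= sEnd) {
            temp[k++] = (array[i] <= array[j]) ? array[i++] : array[j++];
        }
        while (i < sStart) { temp[k++] = array[i++]; }   // left-run leftovers
        while (j <= sEnd)  { temp[k++] = array[j++]; }   // right-run leftovers
        System.arraycopy(temp, 0, array, first, temp.length);
    }

    // Merge runs of length len pairwise, then recurse with doubled run length.
    static void sort(int[] array, int len) {
        int length = array.length;
        int pairs = length / (2 * len);   // full pairs of runs this round
        int rest = length % (2 * len);    // trailing partial run
        if (pairs == 0) { return; }
        for (int i = 0; i < pairs; i++) {
            int base = 2 * i * len;
            merge(array, base, base + len, base + 2 * len - 1);
        }
        if (rest != 0) {                  // fold the tail into the last full pair
            merge(array, length - rest - 2 * len, length - rest, length - 1);
        }
        sort(array, 2 * len);
    }

    public static void main(String[] args) {
        int[] data = {33, 55, 99, 11, 66, 22, 77};
        sort(data, 1);                    // start with runs of length 1
        System.out.println(java.util.Arrays.toString(data));
    }
}
```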