tmp

#include <vector>
using std::vector;

class ScaleSort {
public:
    // Sort an almost-sorted array in which every element is at most k
    // positions away from its final slot, using a sliding min-heap of size k.
    vector<int> sortElement(vector<int> A, int n, int k) {
        if (n < 2 || k < 1)
            return A;
        if (k > n)
            k = n;  // the window cannot be larger than the array
        // Scratch buffer for the heap; std::vector replaces the
        // non-standard variable-length array `int tmp[k]`.
        vector<int> tmp(k);
        for (int i = 0; i < k; i++) {
            tmp[i] = A[i];
        }
        // Build a min-heap over the first k elements.
        BuildHeap(tmp.data(), k);
        // For positions 0 .. n-k-1: the heap top is the smallest remaining
        // candidate; place it in A, then feed the next element into the heap.
        for (int i = 0; i < n - k; i++) {
            A[i] = tmp[0];
            tmp[0] = A[i + k];
            ChangeHeap(tmp.data(), k, 0);
        }
        // Drain the final k elements left in the heap.
        for (int i = n - k; i < n; i++) {
            // The heap top goes into the next slot of the sorted array.
            A[i] = tmp[0];
            if (i == n - 1)
                break;
            // Move the last heap element to the root; the heap shrinks
            // by one each round, hence k--.
            tmp[0] = tmp[--k];
            ChangeHeap(tmp.data(), k, 0);
        }

        return A;
    }

    // Swap two elements of A by index.
    void swap(int* A, int low, int high) {
        int temp = A[low];
        A[low] = A[high];
        A[high] = temp;
    }

    void BuildHeap(int* A, int n) {
        // Sift down every subtree, starting from the last non-leaf node.
        for (int i = n / 2 - 1; i >= 0; --i)
            ChangeHeap(A, n, i);
    }

    void ChangeHeap(int* A, int size, int root) {
        int left = 2 * root + 1;   // left child of the current node
        int right = 2 * root + 2;  // right child of the current node
        int smalli = root;         // assume the root holds the smallest value
        if (left < size && A[left] < A[smalli])
            smalli = left;         // left child is smaller than the root
        if (right < size && A[right] < A[smalli])
            smalli = right;        // right child is smaller than the root
        if (smalli != root) {
            swap(A, root, smalli); // swap the root with the smaller child
            // The swap may break the subtree's heap property below, so
            // sift down recursively.
            ChangeHeap(A, size, smalli);
        }
    }

};
Posted: 2024-10-10 10:16:15
