ExtJS Remote Data with Local Paging

Background

Normally the front end only renders pages, while the back end implements paging through SQL queries. When the total data volume is under about a thousand rows, however, fetching all matching records in a single query and letting the front end handle the paging is also a reasonable option.

Example

This example implements local paging with the ExtJS 4 extension class Ext.ux.data.PagingStore; it is recommended to grab the latest version from GitHub before use.

Usage is very simple: just change the store's parent class to "Ext.ux.data.PagingStore". The rest of the paging configuration is the same as described in the earlier article 《ExtJS实现分页grid paging》.

Ext.define('XXX', {
    extend : 'Ext.ux.data.PagingStore',
    ...
});
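As a minimal sketch of such a store wired to a grid with the standard paging toolbar (the store name, model, endpoint URL, and reader fields below are placeholder assumptions, not from the original article):

```javascript
// Hypothetical names and URL; only extending Ext.ux.data.PagingStore
// comes from the article. Paging happens locally against the full result set.
Ext.define('App.store.Users', {
    extend : 'Ext.ux.data.PagingStore',
    model : 'App.model.User',
    pageSize : 20,
    proxy : {
        type : 'ajax',
        url : 'user/list',   // placeholder endpoint
        reader : { type : 'json', root : 'rows', totalProperty : 'total' }
    }
});

var store = Ext.create('App.store.Users');
var grid = Ext.create('Ext.grid.Panel', {
    store : store,
    columns : [ /* column definitions */ ],
    // the standard paging toolbar works unchanged with PagingStore
    bbar : Ext.create('Ext.toolbar.Paging', { store : store, displayInfo : true })
});
```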


However, two questions remain for different usage scenarios:

  • With local paging in place, how do you force a fresh query against the back end?

    Judging from the PagingStore implementation, the back end is queried again only when the query parameters change. But after modifying a record in the list, we need to refresh the list under the current conditions. My approach is to add a timestamp to the parameters to force the refresh:

store.getProxy().extraParams._TIME = new Date().getTime();
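Wrapped up as a helper, the timestamp trick might look like this (the function name forceReload is mine; only the extraParams line comes from the article):

```javascript
// Hypothetical helper: force a PagingStore to hit the server again.
// Changing any extra parameter makes isPaging() return false, so the
// proxy is consulted instead of the locally cached allData.
function forceReload(store) {
    store.getProxy().extraParams._TIME = new Date().getTime();
    store.load();  // parameters now differ from lastParams, so a real request is sent
}
```

The header comment of PagingStore also suggests `delete store.lastParams;` as an alternative way to force a reload.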
  • How do you remove the "Refresh" button from the paging toolbar?

Because paging is local, the built-in "Refresh" button on ExtJS's paging toolbar is effectively dead weight. It can be hidden once the page has rendered, by adding an afterrender handler in the controller. The tab_id in the code below can be found with the browser's developer tools in the markup ExtJS generates; this is only a starting point, and readers can surely come up with a better selector.

afterrender : function(){
    Ext.get("tab_id").down(".x-tbar-loading").up(".x-btn").setVisible(false);
}
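An untested sketch of a more robust alternative, assuming the toolbar is a standard Ext.toolbar.Paging and the afterrender handler receives the grid component (in Ext JS 4 the paging toolbar assigns the itemId 'refresh' to its refresh button):

```javascript
afterrender : function (grid) {
    // Query by component rather than by a generated DOM id:
    // 'pagingtoolbar' is the xtype of Ext.toolbar.Paging.
    var tbar = grid.down('pagingtoolbar');
    if (tbar) {
        tbar.down('#refresh').hide();
    }
}
```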

Attached is the source of Ext.ux.data.PagingStore.js:

/*
 * PagingStore for Ext 4 - v0.6
 * Based on Ext.ux.data.PagingStore for Ext JS 3, by Condor, found at
 * http://www.sencha.com/forum/showthread.php?71532-Ext.ux.data.PagingStore-v0.5
 * Stores are configured as normal, with whatever proxy you need for remote or local.  Set the
 * lastOptions when defining the store to set start, limit and current page.  Store should only
 * request new data if params or extraParams changes.  In Ext JS 4, start, limit and page are part of the
 * options but no longer part of params.
 * Example remote store:
 *     var myStore = Ext.create('Ext.ux.data.PagingStore', {
 *         model: 'Artist',
 *         pageSize: 3,
 *         lastOptions: {start: 0, limit: 3, page: 1},
 *         proxy: {
 *             type: 'ajax',
 *             url: 'url/goes/here',
 *             reader: {
 *                 type: 'json',
 *                 root: 'rows'
 *             }
 *         }
 *     });
 * Example local store:
 *     var myStore = Ext.create('Ext.ux.data.PagingStore', {
 *         model: 'Artist',
 *         pageSize: 3,
 *         proxy: {
 *             type: 'memory',
 *             reader: {
 *                 type: 'array'
 *             }
 *         },
 *         data: data
 *     });
 * To force a reload, delete store.lastParams.
 */
Ext.define('Ext.ux.data.PagingStore', {
    extend: 'Ext.data.Store',
    alias: 'store.pagingstore',

    destroyStore: function () {
        this.callParent(arguments);
        this.allData = null;
    },

    /**
     * Currently, only looking at start, limit, page and params properties of options.
     * Ignore everything else.
     * @param {Ext.data.Operation} options
     * @return {boolean}
     */
    isPaging: function (options) {
        var me = this,
            start = options.start,
            limit = options.limit,
            page = options.page,
            currentParams;
        if ((typeof start != 'number') || (typeof limit != 'number')) {
            delete me.start;
            delete me.limit;
            delete me.page;
            me.lastParams = options.params;
            return false;
        }
        me.start = start;
        me.limit = limit;
        me.currentPage = page;
        var lastParams = this.lastParams;
        currentParams = Ext.apply({}, options.params, this.proxy ? this.proxy.extraParams : {});
        me.lastParams = currentParams;
        if (!this.proxy) {
            return true;
        }
        // No params from a previous load, must be the first load
        if (!lastParams) {
            return false;
        }
        // Iterate through all of the current parameters; if there are differences,
        // then this is not just a paging request, but instead a true load request
        for (var param in currentParams) {
            if (currentParams.hasOwnProperty(param) && (currentParams[param] !== lastParams[param])) {
                return false;
            }
        }
        // Do the same iteration, but this time walking through the lastParams
        for (param in lastParams) {
            if (lastParams.hasOwnProperty(param) && (currentParams[param] !== lastParams[param])) {
                return false;
            }
        }
        return true;
    },

    applyPaging: function () {
        var me = this,
            start = me.start,
            limit = me.limit,
            allData, data;
        if ((typeof start == 'number') && (typeof limit == 'number')) {
            allData = this.data;
            data = new Ext.util.MixedCollection(allData.allowFunctions, allData.getKey);
            data.addAll(allData.items.slice(start, start + limit));
            me.allData = allData;
            me.data = data;
        }
    },

    loadRecords: function (records, options) {
        var me = this,
            i = 0,
            length = records.length,
            start,
            addRecords,
            snapshot = me.snapshot,
            allData = me.allData;
        if (options) {
            start = options.start;
            addRecords = options.addRecords;
        }
        if (!addRecords) {
            delete me.allData;
            delete me.snapshot;
            me.clearData(true);
        } else if (allData) {
            allData.addAll(records);
        } else if (snapshot) {
            snapshot.addAll(records);
        }
        me.data.addAll(records);
        if (!me.allData) {
            me.applyPaging();
        }
        if (start !== undefined) {
            for (; i < length; i++) {
                records[i].index = start + i;
                records[i].join(me);
            }
        } else {
            for (; i < length; i++) {
                records[i].join(me);
            }
        }
        /*
         * This rather inelegant suspension and resumption of events is required because both
         * the filter and sort functions fire an additional datachanged event, which is not
         * wanted. Ideally we would do this a different way. The first datachanged event is
         * fired by the call to this.add, above.
         */
        me.suspendEvents();
        if (me.filterOnLoad && !me.remoteFilter) {
            me.filter();
        }
        if (me.sortOnLoad && !me.remoteSort) {
            me.sort(undefined, undefined, undefined, true);
        }
        me.resumeEvents();
        me.fireEvent('datachanged', me);
        me.fireEvent('refresh', me);
    },

    loadData: function (data, append) {
        var me = this,
            model = me.model,
            length = data.length,
            newData = [],
            i,
            record;
        me.isPaging(Ext.apply({}, this.lastOptions ? this.lastOptions : {}));
        // Make sure each data element is an Ext.data.Model instance
        for (i = 0; i < length; i++) {
            record = data[i];
            if (!(record.isModel)) {
                record = Ext.ModelManager.create(record, model);
            }
            newData.push(record);
        }
        me.loadRecords(newData, append ? me.addRecordsOptions : undefined);
    },

    loadRawData: function (data, append) {
        var me = this,
            result = me.proxy.reader.read(data),
            records = result.records;
        if (result.success) {
            me.totalCount = result.total;
            me.isPaging(Ext.apply({}, this.lastOptions ? this.lastOptions : {}));
            me.loadRecords(records, append ? me.addRecordsOptions : undefined);
            me.fireEvent('load', me, records, true);
        }
    },

    load: function (options) {
        var me = this,
            pagingOptions;
        options = options || {};
        if (typeof options == 'function') {
            options = {
                callback: options
            };
        }
        options.groupers = options.groupers || me.groupers.items;
        options.page = options.page || me.currentPage;
        options.start = (options.start !== undefined) ? options.start : (options.page - 1) * me.pageSize;
        options.limit = options.limit || me.pageSize;
        options.addRecords = options.addRecords || false;
        if (me.buffered) {
            return me.loadToPrefetch(options);
        }
        var operation;
        options = Ext.apply({
            action: 'read',
            filters: me.filters.items,
            sorters: me.getSorters()
        }, options);
        me.lastOptions = options;
        operation = new Ext.data.Operation(options);
        if (me.fireEvent('beforeload', me, operation) !== false) {
            me.loading = true;
            pagingOptions = Ext.apply({}, options);
            if (me.isPaging(pagingOptions)) {
                Ext.Function.defer(function () {
                    if (me.allData) {
                        me.data = me.allData;
                        delete me.allData;
                    }
                    me.applyPaging();
                    me.fireEvent('datachanged', me);
                    me.fireEvent('refresh', me);
                    var r = [].concat(me.data.items);
                    me.loading = false;
                    me.fireEvent('load', me, r, true);
                    if (me.hasListeners.read) {
                        me.fireEvent('read', me, r, true);
                    }
                    if (options.callback) {
                        options.callback.call(options.scope || me, r, options, true);
                    }
                }, 1, me);
                return me;
            }
            me.proxy.read(operation, me.onProxyLoad, me);
        }
        return me;
    },

    insert: function (index, records) {
        var me = this,
            sync = false,
            i,
            record,
            len;
        records = [].concat(records);
        for (i = 0, len = records.length; i < len; i++) {
            record = me.createModel(records[i]);
            record.set(me.modelDefaults);
            // Reassign the model in the array in case it wasn't created yet
            records[i] = record;
            me.data.insert(index + i, record);
            record.join(me);
            sync = sync || record.phantom === true;
        }
        if (me.allData) {
            me.allData.addAll(records);
        }
        if (me.snapshot) {
            me.snapshot.addAll(records);
        }
        if (me.requireSort) {
            // Suspend events so the usual data changed events don't get fired.
            me.suspendEvents();
            me.sort();
            me.resumeEvents();
        }
        me.fireEvent('add', me, records, index);
        me.fireEvent('datachanged', me);
        if (me.autoSync && sync && !me.autoSyncSuspended) {
            me.sync();
        }
    },

    doSort: function (sorterFn) {
        var me = this,
            range,
            ln,
            i;
        if (me.remoteSort) {
            // For a buffered Store, we have to clear the prefetch cache since it is keyed
            // by the index within the dataset. Then we must prefetch the new page 1, and
            // when that arrives, reload the visible part of the Store via the
            // guaranteedrange event
            if (me.buffered) {
                me.pageMap.clear();
                me.loadPage(1);
            } else {
                // The load function will pick up the new sorters and request the sorted
                // data from the proxy
                me.load();
            }
        } else {
            if (me.allData) {
                me.data = me.allData;
                delete me.allData;
            }
            me.data.sortBy(sorterFn);
            if (!me.buffered) {
                range = me.getRange();
                ln = range.length;
                for (i = 0; i < ln; i++) {
                    range[i].index = i;
                }
            }
            me.applyPaging();
            me.fireEvent('datachanged', me);
            me.fireEvent('refresh', me);
        }
    },

    getTotalCount: function () {
        return this.allData ? this.allData.getCount() : this.totalCount || 0;
    },

    // inherit docs
    getNewRecords: function () {
        if (this.allData) {
            return this.allData.filterBy(this.filterNew).items;
        }
        return this.data.filterBy(this.filterNew).items;
    },

    // inherit docs
    getUpdatedRecords: function () {
        if (this.allData) {
            return this.allData.filterBy(this.filterUpdated).items;
        }
        return this.data.filterBy(this.filterUpdated).items;
    },

    remove: function (records, /* private */ isMove) {
        if (!Ext.isArray(records)) {
            records = [records];
        }
        /*
         * Pass the isMove parameter if we know we're going to be re-inserting this record
         */
        isMove = isMove === true;
        var me = this,
            sync = false,
            i = 0,
            length = records.length,
            isNotPhantom,
            index,
            record;
        for (; i < length; i++) {
            record = records[i];
            index = me.data.indexOf(record);
            if (me.allData) {
                me.allData.remove(record);
            }
            if (me.snapshot) {
                me.snapshot.remove(record);
            }
            if (index > -1) {
                isNotPhantom = record.phantom !== true;
                // Don't push phantom records onto removed
                if (!isMove && isNotPhantom) {
                    // Store the index the record was removed from so that rejectChanges can
                    // re-insert at the correct place. The record's index property won't do,
                    // as that is the index in the overall dataset when Store is buffered.
                    record.removedFrom = index;
                    me.removed.push(record);
                }
                record.unjoin(me);
                me.data.remove(record);
                sync = sync || isNotPhantom;
                me.fireEvent('remove', me, record, index);
            }
        }
        me.fireEvent('datachanged', me);
        if (!isMove && me.autoSync && sync && !me.autoSyncSuspended) {
            me.sync();
        }
    },

    filter: function (filters, value) {
        if (Ext.isString(filters)) {
            filters = {
                property: filters,
                value: value
            };
        }
        var me = this,
            decoded = me.decodeFilters(filters),
            i = 0,
            doLocalSort = me.sorters.length && me.sortOnFilter && !me.remoteSort,
            length = decoded.length;
        for (; i < length; i++) {
            me.filters.replace(decoded[i]);
        }
        if (me.remoteFilter) {
            // So that prefetchPage does not consider the store to be fully loaded if the
            // local count is equal to the total count
            delete me.totalCount;
            // For a buffered Store, we have to clear the prefetch cache because the dataset
            // will change upon filtering. Then we must prefetch the new page 1, and when
            // that arrives, reload the visible part of the Store via the guaranteedrange event
            if (me.buffered) {
                me.pageMap.clear();
                me.loadPage(1);
            } else {
                // Reset to the first page, the filter is likely to produce a smaller data set
                me.currentPage = 1;
                // The load function will pick up the new filters and request the filtered
                // data from the proxy
                me.load();
            }
        } else {
            /**
             * @property {Ext.util.MixedCollection} snapshot
             * A pristine (unfiltered) collection of the records in this store. This is used
             * to reinstate records when a filter is removed or changed
             */
            if (me.filters.getCount()) {
                // Guard: allData only exists once applyPaging has run
                me.snapshot = me.snapshot || (me.allData ? me.allData.clone() : me.data.clone());
                if (me.allData) {
                    me.data = me.allData;
                    delete me.allData;
                }
                me.data = me.data.filter(me.filters.items);
                me.applyPaging();
                if (doLocalSort) {
                    me.sort();
                } else {
                    // Fire datachanged event if it hasn't already been fired by doSort
                    me.fireEvent('datachanged', me);
                    me.fireEvent('refresh', me);
                }
            }
        }
    },

    clearFilter: function (suppressEvent) {
        var me = this;
        me.filters.clear();
        if (me.remoteFilter) {
            // In a buffered Store, the meaning of suppressEvent is to simply clear the
            // filters collection
            if (suppressEvent) {
                return;
            }
            // So that prefetchPage does not consider the store to be fully loaded if the
            // local count is equal to the total count
            delete me.totalCount;
            // For a buffered Store, we have to clear the prefetch cache because the dataset
            // will change upon filtering. Then we must prefetch the new page 1, and when
            // that arrives, reload the visible part of the Store via the guaranteedrange event
            if (me.buffered) {
                me.pageMap.clear();
                me.loadPage(1);
            } else {
                // Reset to the first page, clearing a filter will destroy the context of
                // the current dataset
                me.currentPage = 1;
                me.load();
            }
        } else if (me.isFiltered()) {
            me.data = me.snapshot.clone();
            delete me.allData;
            delete me.snapshot;
            me.applyPaging();
            if (suppressEvent !== true) {
                me.fireEvent('datachanged', me);
                me.fireEvent('refresh', me);
            }
        }
    },

    isFiltered: function () {
        var snapshot = this.snapshot;
        return !!snapshot && snapshot !== (this.allData || this.data);
    },

    filterBy: function (fn, scope) {
        var me = this;
        // Guard: allData only exists once applyPaging has run
        me.snapshot = me.snapshot || (me.allData ? me.allData.clone() : me.data.clone());
        me.data = me.queryBy(fn, scope || me);
        me.applyPaging();
        me.fireEvent('datachanged', me);
        me.fireEvent('refresh', me);
    },

    queryBy: function (fn, scope) {
        var me = this,
            data = me.snapshot || me.allData || me.data;
        return data.filterBy(fn, scope || me);
    },

    collect: function (dataIndex, allowNull, bypassFilter) {
        var me = this,
            data = (bypassFilter === true && (me.snapshot || me.allData)) ? (me.snapshot || me.allData) : me.data;
        return data.collect(dataIndex, 'data', allowNull);
    },

    getById: function (id) {
        return (this.snapshot || this.allData || this.data).findBy(function (record) {
            return record.getId() === id;
        });
    },

    removeAll: function (silent) {
        var me = this;
        me.clearData();
        if (me.snapshot) {
            me.snapshot.clear();
        }
        if (me.allData) {
            me.allData.clear();
        }
        // Special handling to synch the PageMap only for removeAll
        // TODO: handle other store/data modifications WRT buffered Stores.
        if (me.pageMap) {
            me.pageMap.clear();
        }
        if (silent !== true) {
            me.fireEvent('clear', me);
        }
    }
});

Posted: 2024-10-22 08:11:04
