Loading Data by Running a Perl Script with DBMS_SCHEDULER

1. This example uses the DBMS_SCHEDULER package in Oracle 11g to run a Perl script that loads a data file. Three procedures are used: CREATE_JOB, SET_JOB_ARGUMENT_VALUE, and RUN_JOB. Their parameters are described below:

CREATE_JOB attributes:

job_name - Name of the job
job_class - Name of the job class
job_style - Style of the job: REGULAR or LIGHTWEIGHT
program_name - Name of the program that the job runs
job_action - Inline action of the job. This is either the code for an anonymous PL/SQL block or the name of a stored procedure, external executable, or chain.
job_type - Job action type ('PLSQL_BLOCK', 'STORED_PROCEDURE', 'EXECUTABLE', or 'CHAIN')
schedule_name - Name of the schedule that specifies when the job has to execute
repeat_interval - Inline time-based schedule
schedule_limit - Maximum delay time between scheduled and actual job start before a job run is canceled
start_date - Start date and time of the job
end_date - End date and time of the job
event_condition - Event condition for event-based jobs
queue_spec - File watcher name or queue specification for event-based jobs
number_of_arguments - Number of job arguments
arguments - Array of job arguments
job_priority - Job priority
job_weight - Weight of the job for parallel execution. Deprecated in Oracle Database 11gR2; do not change this attribute from its default value of 1.
max_run_duration - Maximum run duration of the job
max_runs - Maximum number of runs before the job is marked as completed
max_failures - Maximum number of failures tolerated before the job is marked as broken
logging_level - Job logging level
restartable - Indicates whether the job is restartable (TRUE) or not (FALSE)
stop_on_window_exit - Indicates whether the job is stopped when the window that it runs in ends (TRUE) or not (FALSE). Equivalent to the stop_on_window_close job attribute described in the SET_ATTRIBUTE procedure.
raise_events - State changes that raise events
comments - Comments on the job
auto_drop - If TRUE (the default), indicates that the job should be dropped once completed
enabled - Indicates whether the job should be enabled immediately after creating it (TRUE) or not (FALSE)
follow_default_timezone - If TRUE and the job start_date is null, then when the default_timezone Scheduler attribute is changed, the Scheduler recomputes the next run date and time for this job so that it is in accordance with the new time zone.
parallel_instances - For event-based jobs only. If TRUE, on the arrival of the specified event, the Scheduler creates a new lightweight job to handle that event, so multiple instances of the same event-based job can run in parallel. If FALSE, an event is discarded if it is raised while the job that handles it is already running.
aq_job - For internal use only
instance_id - The instance ID of the instance that the job must run on
credential_name - The credential to use for a single destination or the default credential for a group of destinations
destination - The name of a single external destination or database destination, or a group name of type external destination or database destination
database_role - In an Oracle Data Guard environment, the database role ('PRIMARY' or 'LOGICAL STANDBY') for which the job runs
allow_runs_in_restricted_mode - If TRUE, the job is permitted to run when the database is in restricted mode, provided that the job owner is permitted to log in during this mode
SET_JOB_ARGUMENT_VALUE parameters:

job_name - The name of the job to be altered
argument_name - The name of the program argument being set
argument_position - The position of the program argument being set
argument_value - The new value to be set for the program argument. To set a non-VARCHAR value, use the SET_JOB_ANYDATA_VALUE procedure.
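As a small illustration of these parameters, an argument is normally set by position; argument_name can be used instead when the job is based on a program that declares named arguments. The job name and value below are placeholders:

begin
  dbms_scheduler.set_job_argument_value(
    job_name          => 'DEMO_OS_JOB',   -- placeholder job name
    argument_position => 1,
    argument_value    => 'first_value');  -- placeholder value
end;
/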

RUN_JOB parameters:

job_name - A job name or a comma-separated list of entries, where each entry is the name of an existing job, optionally preceded by a schema name and a dot separator. If you specify a multiple-destination job, the job runs on all destinations; in this case, the use_current_session argument must be FALSE.

use_current_session - Specifies whether the job run should occur in the same session that the procedure was invoked from.

When use_current_session is set to TRUE:

  • The job runs as the user who called RUN_JOB, or, in the case of a local external job with a credential, as the user named in the credential.
  • You can test a job and see any possible errors on the command line.
  • run_count, last_start_date, last_run_duration, and failure_count are not updated.
  • RUN_JOB can be run in parallel with a regularly scheduled job run.

When use_current_session is set to FALSE:

  • The job runs as the user who is the job owner.
  • You need to check the job log to find error information.
  • run_count, last_start_date, last_run_duration, and failure_count are updated.
  • RUN_JOB fails if a regularly scheduled job is running.

For jobs that have a specified destination or destination group, or that point to chains or programs with the detached attribute set to TRUE, use_current_session must be FALSE.
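For debugging, use_current_session is the parameter that matters most: running the job synchronously in the calling session surfaces any error immediately. A minimal sketch, again with a placeholder job name:

begin
  dbms_scheduler.run_job(
    job_name            => 'DEMO_OS_JOB',  -- placeholder job name
    use_current_session => true);          -- run in this session and raise errors to the caller
end;
/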

Because this example calls the operating system's sqlldr command to load the data file, the job created with CREATE_JOB must have job_type 'EXECUTABLE'. The job_type values have the following meanings:

  • 'PLSQL_BLOCK'

    This specifies that the job is an anonymous PL/SQL block. Job or program arguments are not supported when the job or program type is PLSQL_BLOCK. In this case, the number of arguments must be 0.

  • 'STORED_PROCEDURE'

    This specifies that the job is a PL/SQL or Java stored procedure, or an external C subprogram. Only procedures, not functions with return values, are supported.

  • 'EXECUTABLE'

    This specifies that the job is external to the database. External jobs are anything that can be executed from the command line of the operating system. Anydata arguments are not supported with a job or program type of EXECUTABLE.
    The job owner must have the CREATE EXTERNAL JOB system privilege before the job can be enabled or run.

  • 'CHAIN'

    This specifies that the job is a chain. Arguments are not supported for a chain, so number_of_arguments must be 0.
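Because an EXECUTABLE job cannot be enabled or run unless its owner holds the CREATE EXTERNAL JOB system privilege, it is worth verifying the grant before going further. A simple check, assuming a DBA session:

select grantee, privilege
  from dba_sys_privs
 where privilege = 'CREATE EXTERNAL JOB';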

2. Because DBMS_SCHEDULER is used to create a job of type 'EXECUTABLE', both an operating system user and a database user need to be configured. The configuration below uses ETL (OS user) and ETL_TEST (database user); the test environment is Red Hat 5.5 + Oracle 11g (11.2.3) + Perl 5.8.8.

a. Create the operating system user

[[email protected] ~]# useradd -d /home/etl/ -m etl

[[email protected] ~]# passwd etl

Changing password for user etl.

New UNIX password:

BAD PASSWORD: it is based on a dictionary word

Retype new UNIX password:

passwd: all authentication tokens updated successfully.

Note: On Linux, if no group is specified when creating a user, the system creates a group with the same name as the user by default.

b. Configure the environment variables for the ETL user (/home/etl/.bash_profile); the ORACLE_* settings should match those of the oracle OS user.

# .bash_profile

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

. ~/.bashrc

fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1

export ORACLE_SID=ETL

export ORACLE_TERM=xterm

export NLS_LANG="SIMPLIFIED CHINESE_CHINA.ZHS16GBK"

export ORA_NLS33=$ORACLE_HOME/common/nls/admin/data

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib:/lib

LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib

export LD_LIBRARY_PATH

export PATH=$PATH:$ORACLE_HOME/bin


c. Create the directories for the scripts and data files

[[email protected] /]# mkdir /ETL

[[email protected] /]# cd ETL

[[email protected] ETL]# mkdir bad

[[email protected] ETL]# mkdir log

[[email protected] ETL]# mkdir loader

[[email protected] ETL]# mkdir control

[[email protected] ETL]# mkdir data

[[email protected] ETL]# mkdir backup

[[email protected] ETL]# mkdir sh

[[email protected] ETL]# mkdir perl

[[email protected] ETL]# cd ..

[[email protected] /]# chown -R etl:etl /ETL

[[email protected] /]# chmod -R 777 /ETL

(Directory notes: bad holds records rejected by sqlldr during the load, log holds the sqlldr load logs, loader holds the scripts that load the data files, control holds the sqlldr control files, data holds the data files to be loaded, backup is the backup directory for the data files, sh is the shell script directory, and perl is the Perl script directory.)

d. Because this experiment uses ETL_TEST (the database user) to call the DBMS_SCHEDULER package and load data as the ETL OS user (via sqlldr), the following configuration is needed to run an 'EXECUTABLE' job.

1. Check the permissions on $ORACLE_HOME/rdbms/admin/externaljob.ora

[[email protected] ~]# su - oracle

[[email protected] ~]$ cd $ORACLE_HOME

[[email protected] dbhome_1]$ pwd

/u01/app/oracle/product/11.2.0/dbhome_1

[[email protected] dbhome_1]$ exit

logout

[[email protected] ~]# cd /u01/app/oracle/product/11.2.0/dbhome_1

[[email protected] dbhome_1]# cd rdbms

[[email protected] rdbms]# cd admin

[[email protected] admin]# ls -al|grep externaljob.ora

-rw-r-----  1 root   oinstall    1536 Jan 30 13:28 externaljob.ora

(The ownership and permissions of this file must match the listing above.)

2. Edit $ORACLE_HOME/rdbms/admin/externaljob.ora and set run_user=etl and run_group=etl, as follows:

[[email protected] admin]# vi externaljob.ora

# $Header: externaljob.ora 16-dec-2005.20:47:13 rramkiss Exp $

#

# Copyright (c) 2005, Oracle. All rights reserved.

# NAME

#   externaljob.ora

# FUNCTION

#   This configuration file is used by dbms_scheduler when executing external

#   (operating system) jobs. It contains the user and group to run external

#   jobs as. It must only be writable by the owner and must be owned by root.

#   If extjob is not setuid then the only allowable run_user

#   is the user Oracle runs as and the only allowable run_group is the group

#   Oracle runs as.

#

# NOTES

#   For Porters: The user and group specified here should be a lowly privileged

#                user and group for your platform. For Linux this is nobody

#                and nobody.

# MODIFIED

#     rramkiss   12/09/05 -  Creation

#

##############################################################################

# External job execution configuration file externaljob.ora

#

# This file is provided by Oracle Corporation to help you customize

# your RDBMS installation for your site.  Important system parameters

# are discussed, and default settings given.

#

# This configuration file is used by dbms_scheduler when executing external

# (operating system) jobs. It contains the user and group to run external

# jobs as. It must only be writable by the owner and must be owned by root.

# If extjob is not setuid then the only allowable run_user

# is the user Oracle runs as and the only allowable run_group is the group

# Oracle runs as.

run_user =etl

run_group =etl

3. Check the permissions on the $ORACLE_HOME/bin/extjob file

[[email protected] admin]# su - oracle

[[email protected] ~]$ cd $ORACLE_HOME

[[email protected] dbhome_1]$ pwd

/u01/app/oracle/product/11.2.0/dbhome_1

[[email protected] dbhome_1]$ exit

logout

[[email protected] admin]# cd /u01/app/oracle/product/11.2.0/dbhome_1

[[email protected] dbhome_1]# ls -al|grep extjob

[[email protected] dbhome_1]# cd bin

[[email protected] bin]# ls -al|grep extjob

-rwsr-x---  1 root   oinstall   1249595 Jan 18 00:53 extjob

-rwx------  1 oracle oinstall   1249595 Jan 18 00:53 extjobo

-rwxr-xr-x  1 oracle oinstall   1249958 Sep 17  2011 extjobO

-rwxr-xr-x  1 oracle oinstall   1249958 Sep 17  2011 extjoboO

(Note: the permissions on extjob must match the listing above.)

e. Create the ETL_TEST database user and grant the required privileges

1. Create the ETL_TEST user

create user etl_test identified by etl_test

default tablespace users

temporary tablespace temp;

2. Grant the relevant system and object privileges

grant connect, resource to etl_test;

grant select on sys.v_$session to etl_test;

grant select on sys.v_$process to etl_test;

grant create job to etl_test;

grant create any job to etl_test;

grant create external job to etl_test;

grant MANAGE SCHEDULER to etl_test;

grant alter system to etl_test;

grant execute on DBMS_LOCK to etl_test;

grant execute on DBMS_PIPE to etl_test;

grant execute on UTL_FILE to etl_test;

grant execute on DBMS_SCHEDULER to etl_test;

grant all on DBMS_SCHEDULER to etl_test;

grant execute on DBMS_CRYPTO to etl_test;

grant create any directory to etl_test;

grant debug any procedure, debug connect session to etl_test;

grant select on sys.dba_free_space to etl_test;

grant select on sys.dba_data_files to etl_test;
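As a quick sanity check (not part of the original setup), the grants can be verified from a DBA session:

select privilege from dba_sys_privs where grantee = 'ETL_TEST';
select granted_role from dba_role_privs where grantee = 'ETL_TEST';
select owner, table_name, privilege from dba_tab_privs where grantee = 'ETL_TEST';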

3. Create the Oracle directory objects and grant privileges on them

create or replace directory RWA_FILE_DATA as '/ETL/data';

create or replace directory RWA_FILE_BAD as '/ETL/bad';

create or replace directory RWA_FILE_LOG as '/ETL/log';

create or replace directory RWA_FILE_CONTROL as '/ETL/control';

create or replace directory RWA_FILE_LOADER as '/ETL/loader';

create or replace directory RWA_FILE_SH as '/ETL/sh';

create or replace directory RWA_FILE_BACKUP as '/ETL/backup';

create or replace directory RWA_FILE_PERL as '/ETL/perl';

grant read, write on directory RWA_FILE_DATA to etl_test;

grant read, write on directory RWA_FILE_PERL to etl_test;

grant read, write on directory RWA_FILE_BAD to etl_test;

grant read, write on directory RWA_FILE_LOG to etl_test;

grant read, write on directory RWA_FILE_CONTROL to etl_test;

grant read, write on directory RWA_FILE_LOADER to etl_test;

grant read, write on directory RWA_FILE_SH to etl_test;

grant read, write on directory RWA_FILE_BACKUP  to etl_test;
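To confirm that the directory objects exist and that ETL_TEST can use them, the following queries can be run (directory grants appear in DBA_TAB_PRIVS with the directory name in TABLE_NAME):

select directory_name, directory_path
  from all_directories
 where directory_name like 'RWA_FILE%';

select table_name, privilege
  from dba_tab_privs
 where grantee = 'ETL_TEST'
   and table_name like 'RWA_FILE%';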

f. Load the data file

1. The table into which the data is loaded

create table F_MUREX_GL

(

data_dt  DATE,

areano   VARCHAR2(10),

currency VARCHAR2(10),

apcode   VARCHAR2(20),

orgcde   VARCHAR2(20),

damount  NUMBER,

camount  NUMBER,

remark   VARCHAR2(1000)

);

2. The control file, data file, shell script, and Perl script used for the load are shown below.

a. RWA_EDW_PLEDGE_IMPAWN_INFO.ctl -- the sqlldr control file, located in /ETL/control

[[email protected] control]$ more RWA_EDW_PLEDGE_IMPAWN_INFO.ctl

load data

TRUNCATE into table F_MUREX_GL

fields terminated by X'01'

trailing nullcols

(DATA_DT DATE 'yyyy-mm-dd', AREANO, CURRENCY, APCODE, ORGCDE, DAMOUNT, CAMOUNT, REMARK)

b. RWA_EDW_PLEDGE_IMPAWN_INFO.sh -- the shell script that loads the data, located in /ETL/loader

[[email protected] loader]$ more RWA_EDW_PLEDGE_IMPAWN_INFO.sh

#!/bin/sh

. /home/etl/.bash_profile

vOraPwd=$1

sqlldr userid=etl_test/$vOraPwd@ETL control=/ETL/control/RWA_EDW_PLEDGE_IMPAWN_INFO.ctl data=/ETL/data/RWA_EDW_RWA_PLEDGE_IMPAWN_INFO_20140630_001.txt log=/ETL/log/RWA_EDW_PLEDGE_IMPAWN_INFO.log bad=/ETL/bad/RWA_EDW_PLEDGE_IMPAWN_INFO.bad

c. RWA_EDW_PLEDGE_IMPAWN_INFO.pl -- the Perl script that calls the data-loading shell script (RWA_EDW_PLEDGE_IMPAWN_INFO.sh)

#!/usr/bin/perl
########################################################
# @name        : RWA_EDW_PLEDGE_IMPAWN_INFO.pl
# @parameter   : db user password
# @description : run RWA_EDW_PLEDGE_IMPAWN_INFO.sh and load data into table F_MUREX_GL
#
# @create_date : 2015-02-09
# @author      : Tux
# @version     : 1.0.0
# @source      :
# @target      :
# @modify      :
# @copyright   :
####################################################################
use strict;

my $passwd    = $ARGV[0];
my $clm_shell = '/ETL/loader/RWA_EDW_PLEDGE_IMPAWN_INFO.sh';

# run the shell script and die if it exits with a non-zero status
my $rc = system("sh $clm_shell $passwd");

if ($rc != 0) {
    die "execute sqlldr script failed\n";
}
else {
    print "the sqlldr script ran successfully!\n";
}

d. RWA_EDW_RWA_PLEDGE_IMPAWN_INFO_20140630_001.txt -- the data file

[[email protected] data]$ more RWA_EDW_RWA_PLEDGE_IMPAWN_INFO_20140630_001.txt

2014-06-30^^^^

2014-06-30   00350AED0232    6114                  0.000        1000000.000

2014-06-30   00350AUD0148    6107            4538300.000              0.000

2014-06-30   00350AUD0110    6107            1526300.000              0.000

2014-06-30   00350AUD0971    6107            8006100.000              0.000

2014-06-30   00350AUD0158    6107                154.430              0.000

2014-06-30   00350AUD5497    6108                  0.000          15200.000

2014-06-30   00350AUD0155    6108            1000000.000              0.000

2014-06-30   00350CAD0239    6107                  0.000         950000.000

2014-06-30   00350CAD0247    6107                  0.000         950000.000

2014-06-30   00350CAD9317    6107             262222.000              0.000

2014-06-30   00350CAD0123    6114            1000000.000              0.000

2014-06-30   00350CHF0971    6107            2383200.000              0.000

2014-06-30   00036CNY9867    6118             572590.240              0.000

2014-06-30   00350CNY7066    6118            9000000.000              0.000

2014-06-30   00036CNY7048    6118         7546536516.090              0.000

2014-06-30   00036CNY6814    6118            1323765.700              0.000

2014-06-30   00350CNY5512    6107                  0.000     1089729877.740

2014-06-30   00350CNY9861    6118             248471.230              0.000

2014-06-30   00350CNY9887    6118               2666.660              0.000

2014-06-30   00036CNY5433    6104                  0.000         130676.080

2014-06-30   00350CNY5453    6107          500495181.380              0.000

2014-06-30   00350CNY5562    6107                  0.000       50571245.930

2014-06-30   00350CNY7591    6124           69524032.280              0.000

2014-06-30   00036CNY9971    6104            4313539.170              0.000

2014-06-30   00036CNY6025    6121                105.860              0.000

2014-06-30   00350CNY5149    6118                  0.000         108304.850

2014-06-30   00350CNY5605    6110                  0.000        1790434.050

2014-06-30   00350CNY7274    6110           78709183.050              0.000

2014-06-30   00350CNY7272    6110            1000000.000              0.000

2014-06-30   00350CNY9322    6107                  0.000      316363894.130

2014-06-30   00350CNY8804    6120             328682.870              0.000

2014-06-30   00350CNY0961    6116             330410.960              0.000

2014-06-30   00350CNY0845    6116                  0.000        3041470.520

2014-06-30   00350CNY0745    6116            3041470.520              0.000

2014-06-30   00350CNY0975    6116          200000000.000              0.000

2014-06-30   00350CNY8578    6110                  0.000       98844709.540

2014-06-30   00350CNY7492    6110            3501396.240              0.000

2014-06-30   00350CNY8635    6110           98844709.540              0.000

2014-06-30   00350EUR9875    6112                  0.000        3000000.000

3. Create the EXECUTABLE job that loads the data file, run it, and check the logs

-- create the EXECUTABLE job
begin
  dbms_scheduler.create_job(
    job_name            => 'LF_PERL',
    job_type            => 'EXECUTABLE',
    job_action          => '/ETL/loader/RWA_EDW_PLEDGE_IMPAWN_INFO.pl',
    start_date          => systimestamp,
    number_of_arguments => 1,
    -- job_class        => 'no_logging_class',
    auto_drop           => true,
    comments            => 'LF_PERL');
end;
/

-- pass in the argument (the etl_test password)
begin
  dbms_scheduler.set_job_argument_value(
    job_name          => 'LF_PERL',
    argument_position => 1,
    argument_value    => 'etl_test');
end;
/

-- run the job
begin
  dbms_scheduler.run_job(job_name => 'LF_PERL');
end;
/

-- drop the job
begin
  dbms_scheduler.drop_job(job_name => 'LF_PERL');
end;
/

-- purge the job log
begin
  dbms_scheduler.purge_log(job_name => 'LF_PERL');
end;
/

-- view the job
select * from user_scheduler_jobs t
 where job_name = 'LF_PERL';

-- view the job run details
select * from user_scheduler_job_run_details t
 where job_name = 'LF_PERL';
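After RUN_JOB, the outcome can be confirmed by checking the status and error text of the latest run and then counting the rows that arrived in the target table; this is a minimal sketch that simply reuses the job and table names from above:

select log_date, status, error#, additional_info
  from user_scheduler_job_run_details
 where job_name = 'LF_PERL'
 order by log_date desc;

select count(*) from F_MUREX_GL;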
