Openfiler Study Notes

Openfiler is built on rPath Linux and is distributed as a standalone Linux operating system. It is an excellent open-source (free) storage-management OS: disks are managed through a web interface, and it supports the popular IP-SAN and NAS network-storage technologies as well as the iSCSI (Internet Small Computer System Interface), NFS, SMB/CIFS, and FTP protocols.

1. Installing Openfiler

First download the Openfiler software, then install it in a virtual machine. Openfiler is a Linux-based storage platform, so the installation is much the same as for an ordinary Linux system.

Download: http://www.openfiler.com/community/download/

The first installation screen:

There is a disk-configuration step here. I chose manual partitioning. I gave Openfiler a 40 GB disk: 2 GB for the system, 1 GB for swap, and the remaining space left unallocated.

The screen after installation looks like this:

It tells us to manage the system through a web browser and shows the address:

https://192.168.1.1:446/. The default account is openfiler with password password. After logging in, you can change the password.

2. Storage-side (target) configuration

For the Openfiler configuration, you can refer to these Oracle articles:

http://www.oracle.com/technology/global/cn/pub/articles/hunter_rac10gr2_iscsi.html#9

http://www.oracle.com/technetwork/cn/articles/hunter-rac11gr2-iscsi-083834-zhs.html#11

2.1 Start the iSCSI target service

Enable the iSCSI target service on the Services page. Once enabled, it will start automatically on subsequent reboots.

2.2 Configure the iSCSI initiator access IP

Only hosts whose IPs are configured here are allowed to access the Openfiler storage. The network-access configuration is at the bottom of the System page; just enter the IP there. Note the subnet mask: use 255.255.255.255, which limits the entry to that single host.
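The reason a 255.255.255.255 mask means "this one host only" is that ACL matching ANDs the initiator's IP with the mask and compares it to the network address; an all-ones mask leaves no room for any other address to match. A small sketch of that check (my own illustration, not Openfiler code; the IPs are made up):

```shell
# Sketch: ACL-style subnet matching. An initiator IP matches an entry
# when (ip AND mask) == (network AND mask). With mask 255.255.255.255
# only the exact host can ever match.

ip_to_int() {
    # Convert a dotted quad to a 32-bit integer.
    old_ifs=$IFS; IFS=.
    set -- $1
    IFS=$old_ifs
    echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

matches() {
    net=$(ip_to_int "$1"); mask=$(ip_to_int "$2"); ip=$(ip_to_int "$3")
    [ $(( ip & mask )) -eq $(( net & mask )) ] && echo allowed || echo denied
}

matches 192.168.1.100 255.255.255.255 192.168.1.100   # exact host -> allowed
matches 192.168.1.100 255.255.255.255 192.168.1.101   # any other IP -> denied
matches 192.168.1.0   255.255.255.0   192.168.1.101   # whole /24 -> allowed
```

With a 255.255.255.0 mask the same entry would open the LUN to the whole subnet, which is usually not what you want on shared storage.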

2.3 Create the volume device

Now we configure the shared device. First turn the unallocated space into a partition; it must be an extended partition:

[root@san ~]# fdisk /dev/sda

The number of cylinders for this disk is set to 5221.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 3
First cylinder (383-5221, default 383):
Using default value 383
Last cylinder or +size or +sizeM or +sizeK (383-5221, default 5221):
Using default value 5221

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

[root@san ~]# fdisk -l

Disk /dev/sda: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         255     2048256   83  Linux
/dev/sda2             256         382     1020127+  82  Linux swap / Solaris
/dev/sda3             383        5221    38869267+   5  Extended

After partitioning, the disk shows up on the Openfiler web pages. If the space is not partitioned, or is given the wrong partition type, it cannot be edited there.

Scrolling down the page, we can see the partition-creation section:

Create one partition out of all of the space; this will back the volume. The page then shows:

Once it is created, select Volume Groups.

Then enter the VG name, tick the corresponding device, and confirm.

At this point we have created a volume group named san. But what our environment actually uses are volumes, so we still need to create a volume inside this volume group.

Click the Add Volume option next to it:

Scrolling down this page, we can see the volume-creation options:

Here I put all of the space into a single logical volume.

After the logical volume is created, we need to create an iSCSI target and map the logical volume to it; servers then connect to the storage through that target. Click iSCSI Target and create the Target IQN.

Select LUN Mapping and map the iSCSI target to the logical volume.

Configure the Network ACL that controls access to the logical volume; the IPs come from the list set up on the System page, which we configured earlier. Multiple IPs can be defined here, and you can control which IP is allowed to access which logical volume, so several clients can use the storage at the same time without interfering with one another.

At this point the storage server side is fully configured. In this step we created a logical volume and mapped it to an iSCSI target; the client servers connect through that iSCSI target.

The Openfiler target configuration file is /etc/ietd.conf:

[root@san etc]# cat /etc/ietd.conf
##### WARNING!!! - This configuration file generated by Openfiler. DO NOT MANUALLY EDIT. #####
Target iqn.2006-01.com.san
        HeaderDigest None
        DataDigest None
        MaxConnections 1
        InitialR2T Yes
        ImmediateData No
        MaxRecvDataSegmentLength 131072
        MaxXmitDataSegmentLength 131072
        MaxBurstLength 262144
        FirstBurstLength 262144
        DefaultTime2Wait 2
        DefaultTime2Retain 20
        MaxOutstandingR2T 8
        DataPDUInOrder Yes
        DataSequenceInOrder Yes
        ErrorRecoveryLevel 0
        Lun 0 Path=/dev/san/racshare,Type=blockio,ScsiSN=4YMdbG-SGED-jqHA,ScsiId=4YMdbG-SGED-jqHA,IOMode=wt
[root@san etc]#

Make iSCSI Target(s) Available to Client(s)

Every time a new logical volume is added, you will need to restart the associated service on the Openfiler server. In my case, I created a new iSCSI logical volume, so I needed to restart the iSCSI target (iscsi-target) service. This will make the new iSCSI target available to all clients on the network who have privileges to access it.

To restart the iSCSI target service, use the Openfiler Storage Control Center and navigate to [Services] / [Enable/Disable]. The iSCSI target service should already be enabled (several sections back). If so, disable the service and then enable it again. (See Figure 2)

The same task can be achieved through an SSH session on the Openfiler server:

[root@openfiler1 ~]# service iscsi-target restart
Stopping iSCSI target service:                             [  OK  ]
Starting iSCSI target service:                             [  OK  ]

Configure iSCSI Initiator and New Volume

An iSCSI client can be any system (Linux, Unix, MS Windows, Apple Mac, etc.) for which iSCSI support (a driver) is available. In this article, the client is an Oracle database server (linux3) running CentOS 5.

In this section I will be configuring the iSCSI software initiator on the Oracle database server linux3. Red Hat Enterprise Linux 5 (and CentOS 5) includes the Open-iSCSI software initiator, which can be found in the iscsi-initiator-utils RPM. This is a change from previous versions of RHEL (4.x), which included the iscsi-sfnet software driver developed as part of the Linux-iSCSI Project.

All iSCSI management tasks like discovery and logins will use the command-line interface iscsiadm which is included with Open-iSCSI.

The iSCSI software initiator on linux3 will be configured to automatically login to the network storage server (openfiler1) and discover the iSCSI volume created in the previous section. I will then go through the steps of creating a persistent local SCSI device name (i.e. /dev/iscsi/linux3-data-1) for the discovered iSCSI target name using udev. Having a consistent local SCSI device name, and knowing which iSCSI target it maps to, is highly recommended in order to distinguish between multiple SCSI devices. Before I can do any of this, however, I must first install the iSCSI initiator software!

Connecting to an iSCSI Target with Open-iSCSI Initiator using Linux

http://www.idevelopment.info/data/Unix/Linux/LINUX_ConnectingToAniSCSITargetWithOpen-iSCSIInitiatorUsingLinux.shtml[2015/3/31 9:11:34]

Installing the iSCSI (Initiator) Service

With Red Hat Enterprise Linux 5 (and CentOS 5), the Open-iSCSI software initiator does not get installed by default. The software is included in the iscsi-initiator-utils package, which can be found on CD #1. To determine whether this package is installed (in most cases, it will not be), perform the following on the client node (linux3):

[root@linux3 ~]# rpm -qa | grep iscsi-initiator-utils

If the iscsi-initiator-utils package is not installed, load CD #1 into the machine and perform the following:

[root@linux3 ~]# mount -r /dev/cdrom /media/cdrom
[root@linux3 ~]# cd /media/cdrom/CentOS
[root@linux3 ~]# rpm -Uvh iscsi-initiator-utils-6.2.0.865-0.8.el5.i386.rpm
[root@linux3 ~]# cd /
[root@linux3 ~]# eject

Configure the iSCSI (Initiator) Service

After verifying that the iscsi-initiator-utils package is installed, start the iscsid service and enable it to start automatically when the system boots. Also configure the iscsi service to start automatically; it logs into the iSCSI targets needed at system startup.

[root@linux3 ~]# service iscsid start
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
[root@linux3 ~]# chkconfig iscsid on
[root@linux3 ~]# chkconfig iscsi on

Now that the iSCSI service is started, use the iscsiadm command-line interface to discover all available targets on the network storage server:

[root@linux3 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1
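sendtargets discovery prints one record per target, in the form "portal,tpgt target-IQN". A small sketch of pulling such a record apart with shell parameter expansion (the record below is a hypothetical value constructed to match this article's target, not captured output):

```shell
# Hypothetical discovery record in the "portal,tpgt target-IQN" shape
# that sendtargets discovery prints; the values are assumed, matching
# the target defined earlier in this article.
record="192.168.2.195:3260,1 iqn.2006-01.com.san"

portal=${record%% *}        # portal and tpgt: "192.168.2.195:3260,1"
target=${record#* }         # the target IQN
ip=${portal%%:*}            # portal IP only

echo "$ip"
echo "$target"
```

The IP extracted this way is what the manual login command in the next section needs.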

If no iSCSI targets are discovered, edit /etc/initiators.deny on the Openfiler server and comment out all of its lines.
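One quick way to comment out every line is a single sed pass. The sketch below works on a temporary copy so nothing real is touched; on the Openfiler server the file would be /etc/initiators.deny itself (back it up first), and the sample contents are invented for illustration:

```shell
# Sketch: comment out every line of an initiators.deny-style file so it
# no longer blocks discovery. The file contents here are made up.
tmpfile=$(mktemp)
printf 'ALL ALL\niqn.2006-01.com.san 192.168.2.0/24\n' > "$tmpfile"

# Prefix every line that is not already a comment with '#'.
sed -i 's/^[^#]/#&/' "$tmpfile"

cat "$tmpfile"
```

After the change, rerun the iscsiadm discovery from the client to confirm the target now shows up.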

Manually Login to iSCSI Target(s)

At this point the iSCSI initiator service has been started, and the client node was able to discover the available target(s) from the network storage server. The next step is to manually login to the available target(s), which can be done using the iscsiadm command-line interface. Note that I had to specify the IP address and not the host name of the network storage server (openfiler1-san) - I believe this is required given the discovery (above) shows the targets using the IP address.

[root@linux3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 -p 192.168.2.195 --login
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]: successful

Configure Automatic Login

The next step is to make certain the client will automatically login to the target(s) listed above when the machine is booted (or the iSCSI initiator service is started/restarted):

[root@linux3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 -p 192.168.2.195 --op update -n node.startup -v automatic

Create Persistent Local SCSI Device Names

In this section, I will go through the steps to create a persistent local SCSI device name (/dev/iscsi/linux3-data-1) which will be mapped to the new iSCSI target name. This will be done using udev. Having a consistent local SCSI device name (for example /dev/mydisk1 or /dev/mydisk2) is highly recommended in order to distinguish between multiple SCSI devices (/dev/sda or /dev/sdb) when the node is booted or the iSCSI initiator service is started/restarted.

When the database server node boots and the iSCSI initiator service is started, it will automatically login to the configured target(s) in a random fashion and map them to the next available local SCSI device name. For example, the target iqn.2006-01.com.openfiler:scsi.linux3-data-1 may get mapped to /dev/sda when the node boots. I can actually determine the current mappings for all targets (if there were multiple targets) by looking at the /dev/disk/by-path directory:

[root@linux3 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:scsi.linux3-data-1 -> ../../sda
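The by-path name itself encodes both the portal and the target IQN in the pattern "ip-<portal>-iscsi-<iqn>". A sketch of recovering those pieces with shell parameter expansion (my own illustration, using the name from the listing above):

```shell
# Sketch: decode a udev by-path name of the form ip-<portal>-iscsi-<iqn>
# back into its portal and target IQN, using only parameter expansion.
name="ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:scsi.linux3-data-1"

rest=${name#ip-}                 # drop the "ip-" prefix
portal=${rest%%-iscsi-*}         # "192.168.2.195:3260"
iqn=${rest#*-iscsi-}             # everything after "-iscsi-"

echo "$portal"
echo "$iqn"
```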

Using the output from the above listing, we can establish the following current mappings:

Current iSCSI Target Name to local SCSI Device Name Mappings

Ok, so I only have one target discovered, which maps to /dev/sda. But what if there were multiple targets configured (say, iqn.2006-01.com.openfiler:scsi.linux3-data-2), or better yet, I had multiple removable SCSI devices on linux3? This mapping could change every time the node is rebooted. For example, if I had a second target discovered on linux3 (i.e. iqn.2006-01.com.openfiler:scsi.linux3-data-2), after a reboot it may be determined that the second iSCSI target gets mapped to the local SCSI device /dev/sda and iqn.2006-01.com.openfiler:scsi.linux3-data-1 gets mapped to the local SCSI device /dev/sdb, or vice versa. As you can see, it is impractical to rely on local SCSI device names like /dev/sda or /dev/sdb given there is no way to predict the iSCSI target mappings after a reboot.

What we need is a consistent device name we can reference, like /dev/iscsi/linux3-data-1, that will always point to the appropriate iSCSI target through reboots. This is where the Dynamic Device Management tool named udev comes in. udev provides a dynamic device directory using symbolic links that point to the actual device, driven by a configurable set of rules. When udev receives a device event (for example, the client logging in to an iSCSI target), it matches its configured rules against the available device attributes provided in sysfs to identify the device. Rules that match may provide additional device information or specify a device node name and multiple symlink names, and may instruct udev to run additional programs (a shell script, for example) as part of the device event handling process.

The first step is to create a new rules file. This file will be named /etc/udev/rules.d/55-openiscsi.rules and will contain only a single line of name=value pairs used to match the events we are interested in. It will also define a call-out shell script (/etc/udev/scripts/iscsidev.sh) to handle the event.

Create the following rules file /etc/udev/rules.d/55-openiscsi.rules on the client node linux3:

# /etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b", SYMLINK+="iscsi/%c/part%n"

Next, create the UNIX shell script that will be called when this event is received. Let's first create a separate directory on the linux3 node where udev scripts can be stored:

[root@linux3 ~]# mkdir -p /etc/udev/scripts

Finally, create the UNIX shell script /etc/udev/scripts/iscsidev.sh:

#!/bin/sh

# FILE: /etc/udev/scripts/iscsidev.sh

BUS=${1}
HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
target_name=$(cat ${file})

# This is not an open-scsi drive
if [ -z "${target_name}" ]; then
    exit 1
fi

# Check if QNAP drive
check_qnap_target_name=${target_name%%:*}
if [ $check_qnap_target_name = "iqn.2004-04.com.qnap" ]; then
    target_name=`echo "${target_name%.*}"`
fi

echo "${target_name##*.}"

After creating the shell script, make it executable:

[root@linux3 ~]# chmod 755 /etc/udev/scripts/iscsidev.sh
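Whatever the script echoes becomes the %c substitution in the rules file, which is why the symlink ends up under /dev/iscsi/linux3-data-1. A dry run of the script's string handling on sample values (no /sys access; the %b value "3:0:0:0" is an assumed example of what udev passes in):

```shell
# Dry run of the parameter expansions inside iscsidev.sh, without /sys.
BUS="3:0:0:0"                    # sample %b value udev would pass in
HOST=${BUS%%:*}                  # SCSI host number -> "3"

target_name="iqn.2006-01.com.openfiler:scsi.linux3-data-1"
short=${target_name##*.}         # last dot-separated field -> "linux3-data-1"

echo "$HOST"
echo "$short"
```

So for this article's target the script prints "linux3-data-1", and udev builds the symlink iscsi/linux3-data-1/part%n from it.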

Now that udev is configured, restart the iSCSI initiator service:

[root@linux3 ~]# service iscsi stop
Logging out of session [sid: 3, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]
Logout of [sid: 3, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]: successful
Stopping iSCSI daemon: /etc/init.d/iscsi: line 33:  5143 Killed        /etc/init.d/iscsid stop

[root@linux3 ~]# service iscsi start
iscsid dead but pid file exists
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]: successful
                                                           [  OK  ]

Let's see if our hard work paid off:

[root@linux3 ~]# ls -l /dev/iscsi/
total 0
drwxr-xr-x 2 root root 60 Apr  7 01:57 linux3-data-1

[root@linux3 ~]# ls -l /dev/iscsi/linux3-data-1/
total 0
lrwxrwxrwx 1 root root 9 Apr  7 01:57 part -> ../../sda

The listing above shows that udev did the job it was supposed to do! We now have a consistent set of local device name(s) that can be used to reference the iSCSI targets through reboots. For example, we can safely assume that the device name /dev/iscsi/linux3-data-1/part will always reference the iSCSI target iqn.2006-01.com.openfiler:scsi.linux3-data-1. We now have a consistent iSCSI target name to local device name mapping, which is described in the following table:

Create Primary Partition on iSCSI Volume

I now need to create a single primary partition on the new iSCSI volume that spans the entire size of the volume. The fdisk command is used in Linux for creating (and removing) partitions. You can use the default values when creating the primary partition, as the default action is to use the entire disk. You can safely ignore any warnings that may indicate the device does not contain a valid DOS partition (or Sun, SGI or OSF disklabel).

[root@linux3 ~]# fdisk /dev/iscsi/linux3-data-1/part

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-36864, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-36864, default 36864): 36864

Command (m for help): p

Disk /dev/iscsi/linux3-data-1/part: 38.6 GB, 38654705664 bytes
64 heads, 32 sectors/track, 36864 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

                        Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/linux3-data-1/part1               1       36864    37748720   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Create File System on new iSCSI Volume / Partition

The next step is to create an ext3 file system on the new partition. Provided with the RHEL distribution is a script named /sbin/mkfs.ext3 which makes the task of creating an ext3 file system seamless. Here is an example session of using the mkfs.ext3 script on linux3:

[root@linux3 ~]# mkfs.ext3 -b 4096 /dev/iscsi/linux3-data-1/part1
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
4718592 inodes, 9437180 blocks
471859 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
288 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 26 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
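The block counts reported by mkfs.ext3 follow directly from the fdisk output: the partition is 37748720 one-KB blocks, so with -b 4096 there are 37748720 / 4 filesystem blocks, and 5% of those are reserved for root. A quick sanity check of that arithmetic (my own check, not part of the article):

```shell
# Sanity-check the mkfs.ext3 numbers: 1 KB blocks reported by fdisk
# versus 4 KB filesystem blocks, and the 5% root-reserved count.
partition_kb=37748720                    # Blocks column from fdisk
fs_blocks=$(( partition_kb / 4 ))        # 4096-byte filesystem blocks
reserved=$(( fs_blocks * 5 / 100 ))      # 5.00% reserved for the super user

echo "$fs_blocks"   # 9437180
echo "$reserved"    # 471859
```

Both values match the mkfs.ext3 output.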

Mount the New File System

Now that the new iSCSI volume is partitioned and formatted, the final step is to mount the new volume. For this example, I will be mounting the new volume on the directory /u03.

Create the /u03 directory before attempting to mount the new volume:

[root@linux3 ~]# mkdir -p /u03

Next, edit the /etc/fstab on linux3 and add an entry for the new volume:

/dev/VolGroup00/LogVol00        /               ext3    defaults        1 1
LABEL=/boot                     /boot           ext3    defaults        1 2
tmpfs                           /dev/shm        tmpfs   defaults        0 0
devpts                          /dev/pts        devpts  gid=5,mode=620  0 0
sysfs                           /sys            sysfs   defaults        0 0
proc                            /proc           proc    defaults        0 0
/dev/VolGroup00/LogVol01        swap            swap    defaults        0 0
/dev/iscsi/linux3-data-1/part1  /u03            ext3    _netdev         0 0
cartman:SHARE2                  /cartman        nfs     defaults        0 0
domo:Public                     /domo           nfs     defaults        0 0

After making the new entry in the /etc/fstab file, it is now just a matter of mounting the new iSCSI volume:

[root@linux3 ~]# mount /u03
[root@linux3 ~]# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      56086828  21905480  31286296  42% /
/dev/hda1               101086     19160     76707  20% /boot
tmpfs                  1037056         0   1037056   0% /dev/shm
cartman:SHARE2       306562280      8448 306247272   1% /cartman
domo:Public                ...       ...       ...  ..% /domo
/dev/sda1                  ...       ...       ...  ..% /u03

Logout and Remove an iSCSI Target from a Linux Client

It is my hope that this article has provided valuable insight into how you can take advantage of networked storage and the iSCSI configuration process. As you can see, the process is fairly straightforward. Just as it was simple to configure the Open-iSCSI initiator on Linux, it is just as easy to remove, and that is the subject of this section.

1. Unmount the File System

[root@linux3 ~]# cd
[root@linux3 ~]# umount /u03

After unmounting the file system, remove (or comment out) its related entry from the /etc/fstab file:

# /dev/iscsi/linux3-data-1/part1 /u03 ext3 _netdev 0 0

2. Manually Logout of iSCSI Target(s)

[root@linux3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 -p 192.168.2.195 --logout
Logging out of session [sid: 4, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]
Logout of [sid: 4, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]: successful

Verify we are logged out of the iSCSI target by looking at the /dev/disk/by-path directory. If no other iSCSI targets exist on the client node, then after logging out from the iSCSI target, the mappings for all targets should be gone and the following command should not find any files or directories:

[root@linux3 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ls: *openfiler*: No such file or directory

3. Delete Target and Disable Automatic Login

Update the record entry on the client node to disable automatic logins to the iSCSI target:

[root@linux3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 -p 192.168.2.195 --op update -n node.startup -v manual

Delete the iSCSI target:

[root@linux3 ~]# iscsiadm -m node --op delete --targetname iqn.2006-01.com.openfiler:scsi.linux3-data-1

4. Remove udev Rules Files

If the iSCSI target being removed is the only remaining target and you don't plan on adding any further iSCSI targets in the future, then it is safe to remove the iSCSI rules file and its call-out script:

[root@linux3 ~]# rm /etc/udev/rules.d/55-openiscsi.rules
[root@linux3 ~]# rm /etc/udev/scripts/iscsidev.sh

5. Disable the iSCSI (Initiator) Service

If the iSCSI target being removed is the only remaining target and you don't plan on adding any further iSCSI targets in the future, then it is safe to disable the iSCSI initiator service:

[root@linux3 ~]# service iscsid stop
[root@linux3 ~]# chkconfig iscsid off
[root@linux3 ~]# chkconfig iscsi off

About the Author

Jeffrey Hunter is an Oracle Certified Professional, Java Development Certified Professional, Author, and an Oracle ACE. Jeff currently works as a Senior Database Administrator for The DBA Zone, Inc., located in Pittsburgh, Pennsylvania. His work includes advanced performance tuning, Java and PL/SQL programming, developing high availability solutions, capacity planning, database security, and physical / logical database design in a UNIX / Linux server environment. Jeff's other interests include mathematical encryption theory, tutoring advanced mathematics, programming language processors (compilers and interpreters) in Java and C, LDAP, writing web-based database administration tools, and of course Linux. He has been a Sr. Database Administrator and Software Engineer for over 20 years and maintains his own website at: http://www.iDevelopment.info. Jeff graduated from Stanislaus State University in Turlock, California, with a Bachelor's degree in Computer Science and Mathematics.

Date: 2024-10-28 14:59:24
