Last week I was pressed for time and couldn't keep the series going, so today I'm picking it up again: focus plus persistence is what brings the goal closer. Enough small talk, on to the topic. Today we'll look at another fairly basic piece of Java. As you know, writing Java programs very often involves XML files, some of them configured by a framework and some custom, so we need a decent understanding of how XML handling actually works. At the core of loading an XML file and converting its node elements is one idea: recursive conversion. We can find the implementation class involved with the following code:
    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
    DocumentBuilder build = factory.newDocumentBuilder();
    Document doc = build.parse(new File("mapred-default.xml"));
    System.out.println(build.getDOMImplementation().toString());
The output includes:
    com.sun.org.apache.xerces.internal.dom.DOMImplementationImpl
This tells us it is the DOMImplementationImpl class that loads the XML file and converts it. Now I'll implement the recursive element output myself. First, here are the contents of mapred-default.xml:
    <?xml version="1.0"?>
    <configuration isok="true">
      <property>
        <name>hadoop.job.history.location</name>
        <value></value>
        <description> If job tracker is static the history files are stored
        in this single well known place. If No value is set here, by default,
        it is in the local file system at ${hadoop.log.dir}/history.
        </description>
      </property>
      <property>
        <name>hadoop.job.history.user.location</name>
        <value></value>
        <description> User can specify a location to store the history files of
        a particular job. If nothing is specified, the logs are stored in
        output directory. The files are stored in "_logs/history/" in the directory.
        User can stop logging by giving the value "none".
        </description>
      </property>
    </configuration>
From this file we can tell a few things. configuration is an element, and whether an element carries attributes (here it has isok="true") is something the code has to check for. The next point is one that many people overlook: how many child nodes does configuration actually have? XML is strict about whitespace; the whitespace between tags becomes a node of its own (a text node), so the answer is 5: three whitespace text nodes plus the two <property> elements. The code therefore needs to recognize these whitespace nodes, and this is exactly the kind of detail that trips people up in interviews. After the whitespace node comes the <property> element, whose structure mirrors its parent, so recursion is the natural way to handle it; as with any recursive method, keep in mind that there must be an exit condition. The DOM API provides dedicated methods for things like fetching child nodes. A short sketch below confirms the child-node count, and the full recursive parser follows after it.
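Here is a minimal sketch of that child-node count check (my own addition, assuming the mapred-default.xml shown above sits in the working directory). It prints the root element's child-node count and the type of each child, which should confirm the five nodes: three whitespace text nodes and two <property> elements.

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.*;

    public class ChildNodeCount {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new File("mapred-default.xml"));
            NodeList children = doc.getDocumentElement().getChildNodes();
            // Expecting 5: whitespace text nodes count as children too.
            System.out.println("child node count: " + children.getLength());
            for (int i = 0; i < children.getLength(); i++) {
                Node n = children.item(i);
                // ELEMENT_NODE is 1, TEXT_NODE is 3.
                System.out.println(i + ": type=" + n.getNodeType() + " name=" + n.getNodeName());
            }
        }
    }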
    /**
     * Parse the XML file, recursively printing an element and its children.
     * @param element the node element to print
     */
    public static void parseXMLFile(Element element) {
        System.out.print("<" + element.getTagName());
        NamedNodeMap attributes = element.getAttributes();
        if (attributes != null) {
            for (int i = 0; i < attributes.getLength(); i++) {
                System.out.print(" " + attributes.item(i).getNodeName()
                        + "=\"" + attributes.item(i).getNodeValue() + "\"");
            }
        }
        System.out.print(">");
        NodeList childNodes = element.getChildNodes();
        for (int i = 0; i < childNodes.getLength(); i++) {
            if (childNodes.item(i).getNodeType() == Element.ELEMENT_NODE) {
                // Child elements get the same treatment: recurse.
                parseXMLFile((Element) childNodes.item(i));
            } else {
                // Text nodes (including pure whitespace) are printed as-is.
                System.out.print(childNodes.item(i).getTextContent());
            }
        }
        System.out.print("</" + element.getTagName() + ">");
    }
The exit condition mentioned earlier is implicit here: when an element has no element children, the loop simply makes no further recursive calls. The main method:
    /**
     * @param args
     * @throws Exception
     */
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        DocumentBuilder build = factory.newDocumentBuilder();
        Document doc = build.parse(new File("mapred-default.xml"));
        // System.out.println(build.getDOMImplementation().toString());
        Element root = doc.getDocumentElement();
        parseXMLFile(root);
    }
Running it produces the following output:
    <configuration isok="true">
      <property>
        <name>hadoop.job.history.location</name>
        <value></value>
        <description> If job tracker is static the history files are stored
        in this single well known place. If No value is set here, by default,
        it is in the local file system at ${hadoop.log.dir}/history.
        </description>
      </property>
      <property>
        <name>hadoop.job.history.user.location</name>
        <value></value>
        <description> User can specify a location to store the history files of
        a particular job. If nothing is specified, the logs are stored in
        output directory. The files are stored in "_logs/history/" in the directory.
        User can stop logging by giving the value "none".
        </description>
      </property>
    </configuration>
The output looks right, so the example works.
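As an aside (my addition, not from the original post): if all you need is to print a DOM tree back out as XML, the JDK already ships a recursive serializer in javax.xml.transform, so hand-written recursion like the above is mainly useful for learning or for fully custom output. A minimal sketch, assuming the same mapred-default.xml:

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;

    public class DomToXml {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new File("mapred-default.xml"));
            Transformer t = TransformerFactory.newInstance().newTransformer();
            // Skip the <?xml ...?> declaration so the output resembles the recursive version.
            t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
            // The transformer walks the whole DOM tree and writes it to standard output.
            t.transform(new DOMSource(doc), new StreamResult(System.out));
        }
    }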
That's it for this time. Keep recording things, bit by bit!
- This article comes from: Linux教程网 (Linux Tutorial Network)