Step 1: First, download the LaTeXStudio integrated environment and install it by following the installer's prompts.
Step 2: Write the .tex file
The code is as follows:
\documentclass[journal,onecolumn]{IEEEtran}
\usepackage{amsmath,graphicx}
\usepackage{CJK}
\usepackage{algorithm}    %//format of the algorithm
\usepackage{algorithmic}  %//format of the algorithm
\usepackage{ctex}

% correct bad hyphenation here
\hyphenation{op-tical net-works semi-conduc-tor}

%% redefine the text printed by the algorithmic package for REQUIRE/ENSURE
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\newcommand{\upcite}[1]{\textsuperscript{\textsuperscript{\cite{#1}}}}

\begin{document}

\title{Speech Endpoint Detection Algorithm \upcite{Texton}}
\author{黄sir}

% The paper headers
%\markboth{Journal of \LaTeX\ Class Files,~Vol.~6, No.~1, January~2007}%
%{Shell \MakeLowercase{\textit{et al.}}: Bare Demo of IEEEtran.cls for Journals}

\maketitle
\hfill \today

\begin{abstract}
\boldmath
The abstract goes here.
\end{abstract}

%\begin{IEEEkeywords}
%IEEEtran, journal, \LaTeX, paper, template.
%\end{IEEEkeywords}

\section{Testing Procedure of the Algorithm}

\begin{algorithm}[htb]            % start of the algorithm
\caption{Testing procedure.}      % caption of the algorithm
\label{alg:segmentation}          % give the algorithm a label so it can be referenced in the text
\begin{algorithmic}[1]            % the [1] makes every line numbered
\REQUIRE                          % input of the algorithm (Input)
~~\\the shape filters trained by the boosting algorithm
\\a test image I
\\the set of classes
\ENSURE ~~\ % output of the algorithm (Output)
\\the class of every pixel of the test image \upcite{16bitmcuspeech}
\STATE \textbf{Compute the shape filter responses}
\\for every pixel of the image, compute its response to each of the 700 shape filters (each shape filter produces one output value per class!)
\\there are 21 classes in total
\\so each pixel has 700*21 responses; summing the 700 responses of each class gives the probability that the pixel belongs to that class
\STATE \textbf{Find the maximum posterior probability}
\\for every pixel, the class with the largest posterior probability is taken as that pixel's class
\end{algorithmic}
\end{algorithm}

%\subsection{Subsection Heading Here}
%\subsubsection{Subsubsection Heading Here}
%Subsubsection text here.

\bibliographystyle{IEEEtran}
\bibliography{myBib}

\end{document}
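The label \label{alg:segmentation} and the \upcite macro defined in the preamble exist so that the algorithm and the references can be cited from the running text. As a minimal, purely illustrative sketch (the sentence below is not part of the template above), such a cross-reference could look like this:

% Illustrative only -- not part of the original template.
As shown in Algorithm~\ref{alg:segmentation}, every pixel is assigned the
class with the largest posterior probability, following TextonBoost~\upcite{Texton}.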
Step 3: Create the .bib file. This part can be done together with EndNote; see tutorials online for the detailed steps.
@article{Texton,
  author      = {Shotton, Jamie and Winn, John and Rother, Carsten and Criminisi, Antonio},
  affiliation = {University of Cambridge Machine Intelligence Laboratory Trumpington Street Cambridge CB2 1PZ UK},
  title       = {TextonBoost for Image Understanding: Multi-Class Object Recognition and Segmentation by Jointly Modeling Texture, Layout, and Context},
  journal     = {International Journal of Computer Vision},
  publisher   = {Springer Netherlands},
  issn        = {0920-5691},
  keyword     = {Computer Science},
  pages       = {2-23},
  volume      = {81},
  issue       = {1},
  url         = {http://dx.doi.org/10.1007/s11263-007-0109-1},
  note        = {10.1007/s11263-007-0109-1},
  year        = {2009}
}

@article{16bitmcuspeech,
  language  = {English},
  copyright = {Compilation and indexing terms, Copyright 2014 Elsevier Inc.; Compendex},
  title     = {Making machines understand us in reverberant rooms: Robustness against reverberation for automatic speech recognition},
  journal   = {IEEE Signal Processing Magazine},
  author    = {Yoshioka, Takuya and Sehr, Armin and Delcroix, Marc and Kinoshita, Keisuke and Maas, Roland and Nakatani, Tomohiro and Kellermann, Walter},
  volume    = {29},
  number    = {6},
  year      = {2012},
  pages     = {114 - 126},
  issn      = {10535888},
  address   = {445 Hoes Lane / P.O. Box 1331, Piscataway, NJ 08855-1331, United States},
  abstract  = {Speech recognition technology has left the research laboratory and is increasingly coming into practical use, enabling a wide spectrum of innovative and exciting voice-driven applications that are radically changing our way of accessing digital services and information. Most of today's applications still require a microphone located near the talker. However, almost all of these applications would benefit from distant-talking speech capturing, where talkers are able to speak at some distance from the microphones without the encumbrance of handheld or body-worn equipment [1]. For example, applications such as meeting speech recognition, automatic annotation of consumer-generated videos, speech-to-speech translation in teleconferencing, and hands-free interfaces for controlling consumer-products, like interactive TV, will greatly benefit from distant-talking operation. Furthermore, for a number of unexplored but important applications, distant microphones are a prerequisite. This means that distant talking speech recognition technology is essential for extending the availability of speech recognizers as well as enhancing the convenience of existing speech recognition applications. © 2012 IEEE.},
  key       = {Reverberation},
  keywords  = {Information services;Microphones;Research laboratories;Speech recognition;},
  note      = {Automatic annotation;Automatic speech recognition;Digital services;Handhelds;Hands-free;Interactive TV;Reverberant room;Speech recognition technology;Speech recognizer;Speech-to-speech translation;Wide spectrum;},
  url       = {http://dx.doi.org/10.1109/MSP.2012.2205029},
}
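The two entries above were exported from a reference database. If you add an entry by hand instead, a minimal @article skeleton looks roughly like this (the citation key and every field value below are placeholders, not real references):

@article{placeholderKey2024,          % placeholder citation key
  author  = {Lastname, Firstname},    % placeholder values throughout
  title   = {Title of the Article},
  journal = {Name of the Journal},
  year    = {2024},
  volume  = {1},
  number  = {1},
  pages   = {1-10},
}

Such an entry is then cited in template.tex with \cite{placeholderKey2024}, or as a superscript with the \upcite macro defined in the preamble.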
Attachment: your LaTeX document may fail to compile if the class file IEEEtran.cls is missing. This file is provided in the attachment.
Place the three files template.tex, myBib.bib, and IEEEtran.cls in the same folder; the document will then compile successfully and produce the PDF.
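If you compile from the command line instead of from LaTeXStudio (a sketch assuming pdflatex and bibtex are on the PATH, e.g. from a TeX Live or MiKTeX installation), the usual BibTeX sequence is one pdflatex run, then bibtex, then two more pdflatex runs so that the citations and the reference list are resolved:

pdflatex template
bibtex template
pdflatex template
pdflatex template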