Spark Getting started

Getting started

Add the Maven dependency:

<dependency>
    <groupId>com.sparkjava</groupId>
    <artifactId>spark-core</artifactId>
    <version>2.0.0</version>
</dependency>

Start coding:

import static spark.Spark.*;

public class HelloWorld {
    public static void main(String[] args) {
        get("/hello", (req, res) -> "Hello World");
    }
}

Run and view:

http://localhost:4567/hello

That was easy, right? Spark is the simplest Java web framework to set up, while still providing enough functionality for many types of projects.

Stopping the Server

Calling the stop() method stops the server and clears all routes.
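A minimal sketch of shutting the server down from a route (the /shutdown path is illustrative, not part of the Spark API):

```java
import static spark.Spark.*;

public class ShutdownExample {
    public static void main(String[] args) {
        // Hitting this route stops the embedded server and clears all routes
        get("/shutdown", (request, response) -> {
            stop();
            return "Shutting down";
        });
    }
}
```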

Routes

The main building block of a Spark application is a set of routes. A route is made up of three simple pieces:

  • A verb (get, post, put, delete, head, trace, connect, options)
  • A path (/hello, /users/:name)
  • A callback (request, response) -> { }

Routes are matched in the order they are defined. The first route that matches the request is invoked.

get("/", (request, response) -> {
    // .. Show something ..
});

post("/", (request, response) -> {
    // .. Create something ..
});

put("/", (request, response) -> {
    // .. Update something ..
});

delete("/", (request, response) -> {
    // .. annihilate something ..
});

options("/", (request, response) -> {
    // .. appease something ..
});
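Because routes are matched in declaration order, a more specific route must be declared before an overlapping parameterized one. A sketch (the paths are illustrative):

```java
import static spark.Spark.*;

public class RouteOrderExample {
    public static void main(String[] args) {
        // Declared first, so it handles GET /items/special
        get("/items/special", (request, response) -> "the special item");

        // Also matches /items/special, but is only reached for
        // other values of :id because it is declared later
        get("/items/:id", (request, response) ->
            "item " + request.params(":id"));
    }
}
```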

Route patterns can include named parameters, accessible via the params method on the request object:

// matches "GET /hello/foo" and "GET /hello/bar"
// request.params(":name") is 'foo' or 'bar'
get("/hello/:name", (request, response) -> {
    return "Hello: " + request.params(":name");
});

Route patterns can also include splat (or wildcard) parameters. These parameters can be accessed by using the splat method on the request object:

// matches "GET /say/hello/to/world"
// request.splat()[0] is 'hello' and request.splat()[1] is 'world'
get("/say/*/to/*", (request, response) -> {
    return "Number of splat parameters: " + request.splat().length;
});

Request

In the handle method, request information and functionality are provided by the request parameter:

request.body();               // request body sent by the client
request.cookies();            // request cookies sent by the client
request.contentLength();      // length of request body
request.contentType();        // content type of request.body
request.headers();            // the HTTP header list
request.headers("BAR");       // value of BAR header
request.attributes();         // the attributes list
request.attribute("foo");     // value of foo attribute
request.attribute("A", "V");  // sets value of attribute A to V
request.host();               // "example.com"
request.ip();                 // client IP address
request.pathInfo();           // the path info
request.params("foo");        // value of foo path parameter
request.params();             // map with all parameters
request.port();               // the server port
request.queryMap();           // the query map
request.queryMap("foo");      // query map for a certain parameter
request.queryParams("FOO");   // value of FOO query param
request.queryParams();        // the query param list
request.raw();                // raw request handed in by Jetty
request.requestMethod();      // The HTTP method (GET, ..etc)
request.scheme();             // "http"
request.session();            // session management
request.splat();              // splat (*) parameters
request.url();                // "http://example.com/foo"
request.userAgent();          // user agent

Response

In the handle method, response information and functionality are provided by the response parameter:

response.body("Hello");        // sets content to Hello
response.header("FOO", "bar"); // sets header FOO with value bar
response.raw();                // raw response handed in by Jetty
response.redirect("/example"); // browser redirect to /example
response.status(401);          // set status code to 401
response.type("text/xml");     // set content type to text/xml

Query Maps

Query maps allow you to group parameters into a map by their prefix. For example, you can collect the two parameters user[name] and user[age] into a single user map.

request.queryMap().get("user", "name").value();
request.queryMap().get("user").get("name").value();
request.queryMap("user").get("age").integerValue();
request.queryMap("user").toMap();
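Putting it together in a route, for a request such as GET /greet?user[name]=bob&user[age]=30 (the path and parameter values are illustrative):

```java
import static spark.Spark.*;

public class QueryMapExample {
    public static void main(String[] args) {
        get("/greet", (request, response) -> {
            // Both parameters share the "user" prefix, so they land in one map
            String name = request.queryMap("user").get("name").value();
            Integer age = request.queryMap("user").get("age").integerValue();
            return "Hello " + name + ", age " + age;
        });
    }
}
```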

Cookies

request.cookies();                              // get map of all request cookies
request.cookie("foo");                          // access request cookie by name
response.cookie("foo", "bar");                  // set cookie with a value
response.cookie("foo", "bar", 3600);            // set cookie with a max-age
response.cookie("foo", "bar", 3600, true);      // secure cookie
response.removeCookie("foo");                   // remove cookie

Sessions

Every request has access to the server-side session, which provides the following methods:

request.session(true)                       // create and return session
request.session().attribute("user")         // Get session attribute 'user'
request.session().attribute("user", "foo")  // Set session attribute 'user' to 'foo'
request.session().removeAttribute("user")   // Remove session attribute 'user'
request.session().attributes()              // Get all session attributes
request.session().id()                      // Get session id
request.session().isNew()                   // Check if session is new
request.session().raw()                     // Get the raw servlet session object

Halting

To immediately stop a request within a filter or route, use:

halt();

You can also specify the status when halting:

halt(401);

Or the body:

halt("This is the body");

...or both:

halt(401, "Go away!");

Filters

Before filters are evaluated before each request and can read the request and read/modify the response. 
To stop execution, use halt:

before((request, response) -> {
    boolean authenticated;
    // ... check if authenticated
    if (!authenticated) {
        halt(401, "You are not welcome here");
    }
});

After filters are evaluated after each request and can read the request and read/modify the response:

after((request, response) -> {
    response.header("foo", "set by after filter");
});

Filters optionally take a pattern, causing them to be evaluated only if the request path matches that pattern:

before("/protected/*", (request, response) -> {
    // ... check if authenticated
    halt(401, "Go Away!");
});

Redirects

You can trigger a browser redirect with the redirect helper method:

response.redirect("/bar");

You can also trigger a browser redirect with a specific HTTP 3XX status code:

response.redirect("/bar", 301); // moved permanently

Exception Mapping

To handle exceptions of a configured type for all routes and filters:

get("/throwexception", (request, response) -> {
    // NotFoundException is an application-defined exception class
    throw new NotFoundException();
});

exception(NotFoundException.class, (e, request, response) -> {
    response.status(404);
    response.body("Resource not found");
});

Static Files

You can assign a folder on the classpath to serve static files using the staticFileLocation method. Note that the public directory name is not included in the URL:
a file /public/css/style.css is made available as http://{host}:{port}/css/style.css.

staticFileLocation("/public"); // Static files

You can also assign an external folder (not on the classpath) to serve static files using the externalStaticFileLocation method.

externalStaticFileLocation("/var/www/public"); // Static files

ResponseTransformer

Mapped routes can transform the output of the handle method. This is done by implementing the ResponseTransformer interface and passing the implementation to the mapping method. Example of a route transforming its output to JSON using Gson:

import com.google.gson.Gson;

import spark.ResponseTransformer;

public class JsonTransformer implements ResponseTransformer {

    private Gson gson = new Gson();

    @Override
    public String render(Object model) {
        return gson.toJson(model);
    }

}

and how it is used (MyMessage is a bean with one member 'message'):

get("/hello", "application/json", (request, response) -> {
    return new MyMessage("Hello World");
}, new JsonTransformer());

Views and Templates

A TemplateViewRoute is built from a path (for URL matching) and a template engine that implements the render method. Instead of returning the result of calling toString() as the body, a TemplateViewRoute returns the result of calling the render method.

The primary purpose of this kind of Route is to provide a way to create generic and reusable components for rendering output using a Template Engine.
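A sketch of such a route using the FreeMarker engine described below (the template name and model contents are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

import spark.ModelAndView;
import spark.template.freemarker.FreeMarkerEngine;

import static spark.Spark.*;

public class TemplateExample {
    public static void main(String[] args) {
        get("/hello", (request, response) -> {
            Map<String, Object> model = new HashMap<>();
            model.put("name", "Sam");
            // The engine's render method turns the ModelAndView
            // into the response body instead of toString()
            return new ModelAndView(model, "hello.ftl");
        }, new FreeMarkerEngine());
    }
}
```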

Freemarker

Renders objects to HTML using the Freemarker template engine.

Maven dependency:

<dependency>
    <groupId>com.sparkjava</groupId>
    <artifactId>spark-template-freemarker</artifactId>
    <version>2.0.0</version>
</dependency>

Source: spark-template-freemarker.

Code example: spark-template-freemarker example.

Velocity

Renders objects to HTML using the Velocity template engine.

Maven dependency:

<dependency>
    <groupId>com.sparkjava</groupId>
    <artifactId>spark-template-velocity</artifactId>
    <version>2.0.0</version>
</dependency>

Source: spark-template-velocity.

Code example: spark-template-velocity example.

Mustache

Renders objects to HTML using the Mustache template engine.

Maven dependency:

<dependency>
    <groupId>com.sparkjava</groupId>
    <artifactId>spark-template-mustache</artifactId>
    <version>1.0.0</version>
</dependency>

Source: spark-template-mustache.

Code example: spark-template-mustache example.

Port

By default, Spark runs on port 4567. If you want to use another port, call setPort. This must be done before declaring any routes or filters:

setPort(9090); // Spark will run on port 9090

Embedded webserver

Standalone Spark runs on an embedded Jetty web server.

Other webserver

To run Spark on a web server instead of standalone, you first need an implementation of the interface spark.servlet.SparkApplication; initialize your routes in its init() method. Then configure the following filter in your web.xml:

<filter>
    <filter-name>SparkFilter</filter-name>
    <filter-class>spark.servlet.SparkFilter</filter-class>
    <init-param>
        <param-name>applicationClass</param-name>
        <param-value>com.company.YourApplication</param-value>
    </init-param>
</filter>

<filter-mapping>
    <filter-name>SparkFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
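The application class referenced by the applicationClass parameter might look like this (the class name matches the web.xml above; the route itself is illustrative):

```java
package com.company;

import spark.servlet.SparkApplication;

import static spark.Spark.*;

public class YourApplication implements SparkApplication {

    @Override
    public void init() {
        // Declare routes here; the servlet container provides the
        // server, so no port configuration is needed
        get("/hello", (request, response) -> "Hello from a servlet container");
    }
}
```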

Javadoc

After getting the source from GitHub run:

mvn javadoc:javadoc

The output is placed in /target/site/apidocs.

Examples

Examples can be found on the project's GitHub page.

Date: 2024-12-11 05:48:12
