PIC2, Kernel Density Estimates

Kernel Density Estimation

Demo I

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

if __name__ == '__main__':
    # random data: 100 grades uniformly distributed in [0, 100)
    grade = np.random.rand(100) * 100
    fig = plt.figure()

    # KDE with Scott's rule-of-thumb bandwidth
    ax1 = fig.add_subplot(211)
    ind = np.arange(0., 100., 1)
    gkde = stats.gaussian_kde(grade, bw_method='scott')
    ax1.plot(ind, gkde(ind), label="Gods' Grade", color="g")
    ax1.set_title('Kernel Density Estimation')
    ax1.legend()

    # histogram of the same data, normalized to a density
    ax2 = fig.add_subplot(212)
    ax2.hist(grade, 100, range=(0, 100), density=True)

    plt.show()
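The `bw_method` argument above is what controls how smooth the estimate looks. A minimal sketch (not from the original demo, data regenerated for illustration) comparing the two built-in rules of thumb against a manually chosen scalar factor; in every case the resulting density still integrates to roughly 1:

```python
import numpy as np
from scipy import stats

# bw_method controls the smoothing of scipy's gaussian_kde:
# 'scott' and 'silverman' are rules of thumb; a scalar is used as the
# bandwidth factor directly (it multiplies the sample std. deviation).
rng = np.random.default_rng(0)
grade = rng.random(100) * 100

ind = np.linspace(-50, 150, 2000)  # wide grid so the tails are captured
for bw in ['scott', 'silverman', 0.1]:
    kde = stats.gaussian_kde(grade, bw_method=bw)
    mass = kde(ind).sum() * (ind[1] - ind[0])  # Riemann-sum sanity check
    print(bw, round(mass, 3))
```

A smaller factor gives a spikier, less biased estimate; a larger one gives a smoother but more biased one.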

Demo II

A demo from scikit-learn:

http://scikit-learn.org/stable/auto_examples/neighbors/plot_kde_1d.html

# -*- coding: utf-8 -*-
"""
Created on Wed Oct 22 20:38:13 2014

@author: dell
"""

# Author: Jake Vanderplas <[email protected]>
#
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from sklearn.neighbors import KernelDensity

#----------------------------------------------------------------------
# Plot the progression of histograms to kernels
np.random.seed(1)
N = 20
X = np.concatenate((np.random.normal(0, 1, int(0.3 * N)),
                    np.random.normal(5, 1, int(0.7 * N))))[:, np.newaxis]
X_plot = np.linspace(-5, 10, 1000)[:, np.newaxis]
bins = np.linspace(-5, 10, 10)

fig, ax = plt.subplots(2, 2, sharex=True, sharey=True)
fig.subplots_adjust(hspace=0.05, wspace=0.05)

# histogram 1
# histogram 1
ax[0, 0].hist(X[:, 0], bins=bins, fc='#AAAAFF', density=True)
ax[0, 0].text(-3.5, 0.31, "Histogram")

# histogram 2
ax[0, 1].hist(X[:, 0], bins=bins + 0.75, fc='#AAAAFF', density=True)
ax[0, 1].text(-3.5, 0.31, "Histogram, bins shifted")

# tophat KDE
kde = KernelDensity(kernel='tophat', bandwidth=0.75).fit(X)
log_dens = kde.score_samples(X_plot)
ax[1, 0].fill(X_plot[:, 0], np.exp(log_dens), fc='#AAAAFF')
ax[1, 0].text(-3.5, 0.31, "Tophat Kernel Density")

# Gaussian KDE
kde = KernelDensity(kernel='gaussian', bandwidth=0.75).fit(X)
log_dens = kde.score_samples(X_plot)
ax[1, 1].fill(X_plot[:, 0], np.exp(log_dens), fc='#AAAAFF')
ax[1, 1].text(-3.5, 0.31, "Gaussian Kernel Density")

for axi in ax.ravel():
    axi.plot(X[:, 0], np.zeros(X.shape[0]) - 0.01, '+k')
    axi.set_xlim(-4, 9)
    axi.set_ylim(-0.02, 0.34)

for axi in ax[:, 0]:
    axi.set_ylabel('Normalized Density')

for axi in ax[1, :]:
    axi.set_xlabel('x')

#----------------------------------------------------------------------
# Plot all available kernels
X_plot = np.linspace(-6, 6, 1000)[:, None]
X_src = np.zeros((1, 1))

fig, ax = plt.subplots(2, 3, sharex=True, sharey=True)
fig.subplots_adjust(left=0.05, right=0.95, hspace=0.05, wspace=0.05)

def format_func(x, loc):
    if x == 0:
        return '0'
    elif x == 1:
        return 'h'
    elif x == -1:
        return '-h'
    else:
        return '%ih' % x

for i, kernel in enumerate(['gaussian', 'tophat', 'epanechnikov',
                            'exponential', 'linear', 'cosine']):
    axi = ax.ravel()[i]
    log_dens = KernelDensity(kernel=kernel).fit(X_src).score_samples(X_plot)
    axi.fill(X_plot[:, 0], np.exp(log_dens), '-k', fc='#AAAAFF')
    axi.text(-2.6, 0.95, kernel)

    axi.xaxis.set_major_formatter(plt.FuncFormatter(format_func))
    axi.xaxis.set_major_locator(plt.MultipleLocator(1))
    axi.yaxis.set_major_locator(plt.NullLocator())

    axi.set_ylim(0, 1.05)
    axi.set_xlim(-2.9, 2.9)

ax[0, 1].set_title('Available Kernels')

#----------------------------------------------------------------------
# Plot a 1D density example
N = 100
np.random.seed(1)
X = np.concatenate((np.random.normal(0, 1, int(0.3 * N)),
                    np.random.normal(5, 1, int(0.7 * N))))[:, np.newaxis]

X_plot = np.linspace(-5, 10, 1000)[:, np.newaxis]

true_dens = (0.3 * norm(0, 1).pdf(X_plot[:, 0])
             + 0.7 * norm(5, 1).pdf(X_plot[:, 0]))

fig, ax = plt.subplots()
ax.fill(X_plot[:, 0], true_dens, fc='black', alpha=0.2,
        label='input distribution')

for kernel in ['gaussian', 'tophat', 'epanechnikov']:
    kde = KernelDensity(kernel=kernel, bandwidth=0.5).fit(X)
    log_dens = kde.score_samples(X_plot)
    ax.plot(X_plot[:, 0], np.exp(log_dens), '-',
            label="kernel = '{0}'".format(kernel))

ax.text(6, 0.38, "N={0} points".format(N))

ax.legend(loc='upper left')
ax.plot(X[:, 0], -0.005 - 0.01 * np.random.random(X.shape[0]), '+k')

ax.set_xlim(-4, 9)
ax.set_ylim(-0.02, 0.4)
plt.show()
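The bandwidth of 0.5 in the 1D example above is hand-picked. As a hedged sketch (not part of the scikit-learn example, parameter grid chosen for illustration), the bandwidth can instead be selected by cross-validation, since `KernelDensity.score` returns the total log-likelihood of held-out data and therefore plugs directly into `GridSearchCV`:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

# same bimodal data as in the 1D example
np.random.seed(1)
N = 100
X = np.concatenate((np.random.normal(0, 1, int(0.3 * N)),
                    np.random.normal(5, 1, int(0.7 * N))))[:, np.newaxis]

# search a log-spaced grid of bandwidths, scoring by
# held-out log-likelihood under 5-fold cross-validation
params = {'bandwidth': np.logspace(-1, 1, 20)}
grid = GridSearchCV(KernelDensity(kernel='gaussian'), params, cv=5)
grid.fit(X)
print("best bandwidth:", grid.best_params_['bandwidth'])
```

`grid.best_estimator_` is then a fitted `KernelDensity` that can be used with `score_samples` exactly as in the demo above.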
Posted: 2024-09-29 16:28:19
