2. Building Abstractions with Data

Program construction:

abstract the data itself (data abstraction), and abstract the procedures that operate on the data as functions (function abstraction)

abstraction barriers

When you need some property of the data, access it through the corresponding selector method rather than reaching into how the data is constructed.
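
As a minimal sketch of keeping code above the barrier, here is the classic rational-number example; the list representation and the helper names below are illustrative assumptions, not taken from these notes.

from math import gcd

def rational(n, d):
    """Constructor: build the rational number n/d in lowest terms."""
    g = gcd(n, d)
    return [n // g, d // g]

def numer(x):
    """Selector: the numerator of rational number x."""
    return x[0]

def denom(x):
    """Selector: the denominator of rational number x."""
    return x[1]

def mul_rational(x, y):
    # Stays above the abstraction barrier: it uses only the constructor
    # and selectors, never the underlying list representation.
    return rational(numer(x) * numer(y), denom(x) * denom(y))

third = mul_rational(rational(1, 2), rational(2, 3))
assert numer(third) == 1 and denom(third) == 3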

sequence processing (a small example follows the list):

  • sequence iteration
  • list comprehensions
  • aggregation: aggregate all values in a sequence into a single value. The built-in functions sum, min, and max are all examples of aggregation functions.
  • higher-order functions
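
A small illustrative example (assumed, not from the notes) showing the same sequence handled with each of the four techniques:

odds = [1, 3, 5, 7, 9]

# sequence iteration
total = 0
for x in odds:
    total += x

# list comprehension
squares = [x * x for x in odds]

# aggregation: collapse a whole sequence into a single value
assert sum(odds) == total == 25
assert max(squares) == 81

# higher-order functions
doubled = list(map(lambda x: 2 * x, odds))
assert doubled == [2, 6, 10, 14, 18]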

Closure property: a method for combining data values has a closure property if the result of combination can itself be combined using the same method. Closure is the key to power in any means of combination because it permits us to create hierarchical structures — structures made up of parts, which themselves are made up of parts, and so on.
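
A tiny illustration, assuming Python lists as the means of combination: a list can contain lists, so the same combining method yields hierarchical structure.

pair = [1, 2]
tree = [pair, [3, [4, 5]]]   # a hierarchy built entirely out of lists
assert tree[1][1][0] == 4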

Dispatch function: a general method for implementing a message-passing interface for abstract data. The function is a dispatch function whose first argument is a message, followed by additional arguments that parameterize the method. The message is a string naming what the function should do. Dispatch functions are effectively many functions in one: the message determines the behavior of the function, and the additional arguments are used in that behavior.
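
A minimal sketch of a dispatch function; the pair representation and the message names below are illustrative assumptions:

def pair(x, y):
    """Return a dispatch function representing the pair (x, y)."""
    def dispatch(message, index=None):
        if message == 'first':
            return x
        elif message == 'second':
            return y
        elif message == 'getitem':   # the extra argument parameterizes the behavior
            return x if index == 0 else y
        raise ValueError('unknown message: ' + repr(message))
    return dispatch

p = pair(1, 2)
assert p('first') == 1
assert p('getitem', 1) == 2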

Generic function: a function that can accept values of multiple different types. We will consider three different techniques for implementing generic functions (a sketch of the latter two follows the list):

  • Shared interfaces: give every type the same set of operations/attribute names, so values of different types can be used interchangeably.
  • Type dispatching: Look up a cross-type implementation of an operation based on the types of its arguments
  • Type coercion: Look up a function for converting one type to another, then apply a type-specific implementation.
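
Here is a minimal sketch of type dispatching and type coercion; the Rational/Complex classes and the table names are illustrative assumptions, not from the notes.

class Rational:
    def __init__(self, numer, denom):
        self.numer, self.denom = numer, denom

class Complex:
    def __init__(self, real, imag):
        self.real, self.imag = real, imag

def add_rational(x, y):
    return Rational(x.numer * y.denom + y.numer * x.denom, x.denom * y.denom)

def add_complex(x, y):
    return Complex(x.real + y.real, x.imag + y.imag)

def rational_to_complex(x):
    return Complex(x.numer / x.denom, 0)

# Type dispatching: look up an implementation keyed by the argument types.
adders = {(Rational, Rational): add_rational, (Complex, Complex): add_complex}

# Type coercion: convert one type to the other, then reuse a type-specific adder.
coercions = {(Rational, Complex): rational_to_complex}

def add(x, y):
    types = (type(x), type(y))
    if types in adders:
        return adders[types](x, y)
    if types in coercions:                   # coerce the first argument
        return add(coercions[types](x), y)
    if (types[1], types[0]) in coercions:    # coerce the second argument
        return add(x, coercions[(types[1], types[0])](y))
    raise TypeError('no method for types ' + str(types))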

Count how many times a function is called:

def count(f):
    """Return a version of f that counts how many times it has been called."""
    def counted(*args):
        counted.call_count += 1
        return f(*args)
    counted.call_count = 0
    return counted
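
A usage sketch (the naive fib below is an assumed example): because the recursive calls go through the rebound global name, every call is counted.

@count
def fib(n):
    return n if n < 2 else fib(n - 2) + fib(n - 1)

fib(10)
print(fib.call_count)   # 177 calls for the naive tree recursion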

Count how many frames are active at the same time (the maximum depth of open calls):

def count_frames(f):
    """Return a version of f that tracks the maximum number of open frames."""
    def counted(*args):
        counted.open_count += 1
        counted.max_count = max(counted.max_count, counted.open_count)
        result = f(*args)
        counted.open_count -= 1
        return result
    counted.open_count = 0
    counted.max_count = 0
    return counted
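
A usage sketch with the same assumed fib: max_count records the deepest chain of simultaneously open frames, i.e. the maximum recursion depth reached.

@count_frames
def fib(n):
    return n if n < 2 else fib(n - 2) + fib(n - 1)

fib(10)
print(fib.max_count)   # 10: the longest chain of open fib frames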

Memoization (cache previously computed results):

def memo(f):
    """Return a memoized version of the single-argument function f."""
    cache = {}
    def memoized(n):
        if n not in cache:
            cache[n] = f(n)   # compute and store only on a cache miss
        return cache[n]
    return memoized
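
A usage sketch combining memo with the count decorator defined above (fib is again an assumed example): memoization collapses the exponential call tree to one body execution per distinct argument.

@count
def fib(n):
    return n if n < 2 else fib(n - 2) + fib(n - 1)

counted_fib = fib          # keep a handle on the counted version
fib = memo(fib)            # recursive calls now go through the cache
fib(30)
print(counted_fib.call_count)   # 31: each fib(n) body runs only once
# Without memoization, fib(30) takes 2,692,537 calls.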

Lexical scope:
The parent of a frame is the environment in which a procedure was defined
Dynamic scope:
The parent of a frame is the environment in which a procedure was called
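
A small sketch showing that Python is lexically scoped; the function names are illustrative:

def f():
    x = 'lexical'
    def g():
        return x           # resolved in the frame where g was defined (f's frame)
    return g

def h():
    x = 'dynamic'          # under dynamic scope, g would find this binding instead
    return f()()

assert h() == 'lexical'    # Python uses lexical scope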

tail recursion

In general, recursion takes about the same amount of time as iteration but uses far more memory.
Tail recursion works by continually discarding frames that are no longer needed.

The pdf shows how to convert a non-tail context into a tail context (a recursive call is in tail context when it is the last thing the procedure does). This also suggests when recursion or iteration is the more convenient choice: anything that can be written with iteration can be written as a tail recursion. A sketch follows.
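
A minimal sketch of the transformation, using factorial as an assumed example; note that CPython itself does not eliminate tail calls, so the space savings only materialize in a language like Scheme.

def fact(n):
    # Not a tail call: the multiplication happens after the recursive
    # call returns, so every frame must stay open.
    if n == 0:
        return 1
    return n * fact(n - 1)

def fact_tail(n, acc=1):
    # Tail context: the recursive call is the last thing the procedure
    # does, so a tail-call-eliminating language can discard the caller's frame.
    if n == 0:
        return acc
    return fact_tail(n - 1, acc * n)

assert fact(10) == fact_tail(10) == 3628800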

