Computer Vision: A Modern Approach, Second Edition (English Edition)
Computer vision is the science of how artificial systems can "perceive" from images or multidimensional data. This book is a classic textbook in the field, covering geometric camera models, light and shading, color, linear filters, local image features, texture, stereopsis, structure from motion, segmentation by clustering, grouping and model fitting, tracking, registration, smooth surfaces and their outlines, range data, image classification, object detection and recognition, image-based modeling and rendering, looking at people, image search and retrieval, optimization techniques, and more. Compared with the first edition, this edition streamlines some topics, adds application examples, rewrites the material on modern features, and gives a fuller treatment of modern image editing techniques and object recognition.
* Concise, clear presentation of the mathematics
* New material on modern features
* Modern image editing techniques and object recognition techniques
David Forsyth: He received a bachelor's degree in electrical engineering from the University of the Witwatersrand in 1984, a master's degree in electrical engineering in 1986, and a doctorate from Balliol College, Oxford, in 1989. He then spent three years on the faculty of the University of Iowa and ten years at the University of California, Berkeley, before moving to the University of Illinois. He served as program co-chair of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) in 2000 and 2001, general co-chair of CVPR 2006, and program co-chair of the European Conference on Computer Vision in 2008, and he is a regular member of the program committees of all the major international computer vision conferences. He served five terms on the SIGGRAPH program committee. He received an IEEE Technical Achievement Award in 2006 and became an IEEE Fellow in 2009.
Jean Ponce: He received his Ph.D. in computer science from the University of Paris-Orsay in 1988. He has held research scientist positions at the French national institute for research in computer science (INRIA), the MIT Artificial Intelligence Laboratory, and the Stanford University Robotics Laboratory, and from 1990 to 2005 he was with the Department of Computer Science at the University of Illinois. Since 2005 he has been a professor at the École Normale Supérieure in Paris.
Contents

I Image Formation 1
1 Geometric Camera Models 3
1.1 Image Formation 4 (1.1.1 Pinhole Perspective 4; 1.1.2 Weak Perspective 6; 1.1.3 Cameras with Lenses 8; 1.1.4 The Human Eye 12)
1.2 Intrinsic and Extrinsic Parameters 14 (1.2.1 Rigid Transformations and Homogeneous Coordinates 14; 1.2.2 Intrinsic Parameters 16; 1.2.3 Extrinsic Parameters 18; 1.2.4 Perspective Projection Matrices 19; 1.2.5 Weak-Perspective Projection Matrices 20)
1.3 Geometric Camera Calibration 22 (1.3.1 A Linear Approach to Camera Calibration 23; 1.3.2 A Nonlinear Approach to Camera Calibration 27)
1.4 Notes 29
2 Light and Shading 32
2.1 Modelling Pixel Brightness 32 (2.1.1 Reflection at Surfaces 33; 2.1.2 Sources and Their Effects 34; 2.1.3 The Lambertian+Specular Model 36; 2.1.4 Area Sources 36)
2.2 Inference from Shading 37 (2.2.1 Radiometric Calibration and High Dynamic Range Images 38; 2.2.2 The Shape of Specularities 40; 2.2.3 Inferring Lightness and Illumination 43; 2.2.4 Photometric Stereo: Shape from Multiple Shaded Images 46)
2.3 Modelling Interreflection 52 (2.3.1 The Illumination at a Patch Due to an Area Source 52; 2.3.2 Radiosity and Exitance 54; 2.3.3 An Interreflection Model 55; 2.3.4 Qualitative Properties of Interreflections 56)
2.4 Shape from One Shaded Image 59
2.5 Notes 61
3 Color 68
3.1 Human Color Perception 68 (3.1.1 Color Matching 68; 3.1.2 Color Receptors 71)
3.2 The Physics of Color 73 (3.2.1 The Color of Light Sources 73; 3.2.2 The Color of Surfaces 76)
3.3 Representing Color 77 (3.3.1 Linear Color Spaces 77; 3.3.2 Non-linear Color Spaces 83)
3.4 A Model of Image Color 86 (3.4.1 The Diffuse Term 88; 3.4.2 The Specular Term 90)
3.5 Inference from Color 90 (3.5.1 Finding Specularities Using Color 90; 3.5.2 Shadow Removal Using Color 92; 3.5.3 Color Constancy: Surface Color from Image Color 95)
3.6 Notes 99

II Early Vision: Just One Image 105
4 Linear Filters 107
4.1 Linear Filters and Convolution 107 (4.1.1 Convolution 107)
4.2 Shift Invariant Linear Systems 112 (4.2.1 Discrete Convolution 113; 4.2.2 Continuous Convolution 115; 4.2.3 Edge Effects in Discrete Convolutions 118)
4.3 Spatial Frequency and Fourier Transforms 118 (4.3.1 Fourier Transforms 119)
4.4 Sampling and Aliasing 121 (4.4.1 Sampling 122; 4.4.2 Aliasing 125; 4.4.3 Smoothing and Resampling 126)
4.5 Filters as Templates 131 (4.5.1 Convolution as a Dot Product 131; 4.5.2 Changing Basis 132)
4.6 Technique: Normalized Correlation and Finding Patterns 132 (4.6.1 Controlling the Television by Finding Hands by Normalized Correlation 133)
4.7 Technique: Scale and Image Pyramids 134 (4.7.1 The Gaussian Pyramid 135; 4.7.2 Applications of Scaled Representations 136)
4.8 Notes 137
5 Local Image Features 141
5.1 Computing the Image Gradient 141 (5.1.1 Derivative of Gaussian Filters 142)
5.2 Representing the Image Gradient 144 (5.2.1 Gradient-Based Edge Detectors 145; 5.2.2 Orientations 147)
5.3 Finding Corners and Building Neighborhoods 148 (5.3.1 Finding Corners 149; 5.3.2 Using Scale and Orientation to Build a Neighborhood 151)
5.4 Describing Neighborhoods with SIFT and HOG Features 155 (5.4.1 SIFT Features 157; 5.4.2 HOG Features 159)
5.5 Computing Local Features in Practice 160
5.6 Notes 160
6 Texture 164
6.1 Local Texture Representations Using Filters 166 (6.1.1 Spots and Bars 167; 6.1.2 From Filter Outputs to Texture Representation 168; 6.1.3 Local Texture Representations in Practice 170)
6.2 Pooled Texture Representations by Discovering Textons 171 (6.2.1 Vector Quantization and Textons 172; 6.2.2 K-Means Clustering for Vector Quantization 172)
6.3 Synthesizing Textures and Filling Holes in Images 176 (6.3.1 Synthesis by Sampling Local Models 176; 6.3.2 Filling in Holes in Images 179)
6.4 Image Denoising 182 (6.4.1 Non-local Means 183; 6.4.2 Block Matching 3D (BM3D) 183; 6.4.3 Learned Sparse Coding 184; 6.4.4 Results 186)
6.5 Shape from Texture 187 (6.5.1 Shape from Texture for Planes 187; 6.5.2 Shape from Texture for Curved Surfaces 190)
6.6 Notes 191

III Early Vision: Multiple Images 195
7 Stereopsis 197
7.1 Binocular Camera Geometry and the Epipolar Constraint 198 (7.1.1 Epipolar Geometry 198; 7.1.2 The Essential Matrix 200; 7.1.3 The Fundamental Matrix 201)
7.2 Binocular Reconstruction 201 (7.2.1 Image Rectification 202)
7.3 Human Stereopsis 203
7.4 Local Methods for Binocular Fusion 205 (7.4.1 Correlation 205; 7.4.2 Multi-scale Edge Matching 207)
7.5 Global Methods for Binocular Fusion 210 (7.5.1 Ordering Constraints and Dynamic Programming 210; 7.5.2 Smoothness and Graphs 211)
7.6 Using More Cameras 214
7.7 Application: Robot Navigation 215
7.8 Notes 216
8 Structure from Motion 221
8.1 Internally Calibrated Perspective Cameras 221 (8.1.1 Natural Ambiguity of the Problem 223; 8.1.2 Euclidean Structure and Motion from Two Images 224; 8.1.3 Euclidean Structure and Motion from Multiple Images 228)
8.2 Uncalibrated Weak-Perspective Cameras 230 (8.2.1 Natural Ambiguity of the Problem 231; 8.2.2 Affine Structure and Motion from Two Images 233; 8.2.3 Affine Structure and Motion from Multiple Images 237; 8.2.4 From Affine to Euclidean Shape 238)
8.3 Uncalibrated Perspective Cameras 240 (8.3.1 Natural Ambiguity of the Problem 241; 8.3.2 Projective Structure and Motion from Two Images 242; 8.3.3 Projective Structure and Motion from Multiple Images 244; 8.3.4 From Projective to Euclidean Shape 246)
8.4 Notes 248

IV Mid-Level Vision 253
9 Segmentation by Clustering 255
9.1 Human Vision: Grouping and Gestalt 256
9.2 Important Applications 261 (9.2.1 Background Subtraction 261; 9.2.2 Shot Boundary Detection 264; 9.2.3 Interactive Segmentation 265; 9.2.4 Forming Image Regions 266)
9.3 Image Segmentation by Clustering Pixels 268 (9.3.1 Basic Clustering Methods 269; 9.3.2 The Watershed Algorithm 271; 9.3.3 Segmentation Using K-Means 272; 9.3.4 Mean Shift: Finding Local Modes in Data 273; 9.3.5 Clustering and Segmentation with Mean Shift 275)
9.4 Segmentation, Clustering, and Graphs 277 (9.4.1 Terminology and Facts for Graphs 277; 9.4.2 Agglomerative Clustering with a Graph 279; 9.4.3 Divisive Clustering with a Graph 281; 9.4.4 Normalized Cuts 284)
9.5 Image Segmentation in Practice 285 (9.5.1 Evaluating Segmenters 286)
9.6 Notes 287
10 Grouping and Model Fitting 290
10.1 The Hough Transform 290 (10.1.1 Fitting Lines with the Hough Transform 290; 10.1.2 Using the Hough Transform 292)
10.2 Fitting Lines and Planes 293 (10.2.1 Fitting a Single Line 294; 10.2.2 Fitting Planes 295; 10.2.3 Fitting Multiple Lines 296)
10.3 Fitting Curved Structures 297
10.4 Robustness 299 (10.4.1 M-Estimators 300; 10.4.2 RANSAC: Searching for Good Points 302)
10.5 Fitting Using Probabilistic Models 306 (10.5.1 Missing Data Problems 307; 10.5.2 Mixture Models and Hidden Variables 309; 10.5.3 The EM Algorithm for Mixture Models 310; 10.5.4 Difficulties with the EM Algorithm 312)
10.6 Motion Segmentation by Parameter Estimation 313 (10.6.1 Optical Flow and Motion 315; 10.6.2 Flow Models 316; 10.6.3 Motion Segmentation with Layers 317)
10.7 Model Selection: Which Model Is the Best Fit? 319 (10.7.1 Model Selection Using Cross-Validation 322)
10.8 Notes 322
11 Tracking 326
11.1 Simple Tracking Strategies 327 (11.1.1 Tracking by Detection 327; 11.1.2 Tracking Translations by Matching 330; 11.1.3 Using Affine Transformations to Confirm a Match 332)
11.2 Tracking Using Matching 334 (11.2.1 Matching Summary Representations 335; 11.2.2 Tracking Using Flow 337)
11.3 Tracking Linear Dynamical Models with Kalman Filters 339 (11.3.1 Linear Measurements and Linear Dynamics 340; 11.3.2 The Kalman Filter 344; 11.3.3 Forward-Backward Smoothing 345)
11.4 Data Association 349 (11.4.1 Linking Kalman Filters with Detection Methods 349; 11.4.2 Key Methods of Data Association 350)
11.5 Particle Filtering 350 (11.5.1 Sampled Representations of Probability Distributions 351; 11.5.2 The Simplest Particle Filter 355; 11.5.3 The Tracking Algorithm 356; 11.5.4 A Workable Particle Filter 358; 11.5.5 Practical Issues in Particle Filters 360)
11.6 Notes 362

V High-Level Vision 365
12 Registration 367
12.1 Registering Rigid Objects 368 (12.1.1 Iterated Closest Points 368; 12.1.2 Searching for Transformations via Correspondences 369; 12.1.3 Application: Building Image Mosaics 370)
12.2 Model-Based Vision: Registering Rigid Objects with Projection 375 (12.2.1 Verification: Comparing Transformed and Rendered Source to Target 377)
12.3 Registering Deformable Objects 378 (12.3.1 Deforming Texture with Active Appearance Models 378; 12.3.2 Active Appearance Models in Practice 381; 12.3.3 Application: Registration in Medical Imaging Systems 383)
12.4 Notes 388
13 Smooth Surfaces and Their Outlines 391
13.1 Elements of Differential Geometry 393 (13.1.1 Curves 393; 13.1.2 Surfaces 397)
13.2 Contour Geometry 402 (13.2.1 The Occluding Contour and the Image Contour 402; 13.2.2 The Cusps and Inflections of the Image Contour 403; 13.2.3 Koenderink's Theorem 404)
13.3 Visual Events: More Differential Geometry 407 (13.3.1 The Geometry of the Gauss Map 407; 13.3.2 Asymptotic Curves 409; 13.3.3 The Asymptotic Spherical Map 410; 13.3.4 Local Visual Events 412; 13.3.5 The Bitangent Ray Manifold 413; 13.3.6 Multilocal Visual Events 414; 13.3.7 The Aspect Graph 416)
13.4 Notes 417
14 Range Data 422
14.1 Active Range Sensors 422
14.2 Range Data Segmentation 424 (14.2.1 Elements of Analytical Differential Geometry 424; 14.2.2 Finding Step and Roof Edges in Range Images 426; 14.2.3 Segmenting Range Images into Planar Regions 431)
14.3 Range Image Registration and Model Acquisition 432 (14.3.1 Quaternions 433; 14.3.2 Registering Range Images 434; 14.3.3 Fusing Multiple Range Images 436)
14.4 Object Recognition 438 (14.4.1 Matching Using Interpretation Trees 438; 14.4.2 Matching Free-Form Surfaces Using Spin Images 441)
14.5 Kinect 446 (14.5.1 Features 447; 14.5.2 Technique: Decision Trees and Random Forests 448; 14.5.3 Labeling Pixels 450; 14.5.4 Computing Joint Positions 453)
14.6 Notes 453
15 Learning to Classify 457
15.1 Classification, Error, and Loss 457 (15.1.1 Using Loss to Determine Decisions 457; 15.1.2 Training Error, Test Error, and Overfitting 459; 15.1.3 Regularization 460; 15.1.4 Error Rate and Cross-Validation 463; 15.1.5 Receiver Operating Curves 465)
15.2 Major Classification Strategies 467 (15.2.1 Example: Mahalanobis Distance 467; 15.2.2 Example: Class-Conditional Histograms and Naive Bayes 468; 15.2.3 Example: Classification Using Nearest Neighbors 469; 15.2.4 Example: The Linear Support Vector Machine 470; 15.2.5 Example: Kernel Machines 473; 15.2.6 Example: Boosting and AdaBoost 475)
15.3 Practical Methods for Building Classifiers 475 (15.3.1 Manipulating Training Data to Improve Performance 477; 15.3.2 Building Multi-Class Classifiers out of Binary Classifiers 479; 15.3.3 Solving for SVMs and Kernel Machines 480)
15.4 Notes 481
16 Classifying Images 482
16.1 Building Good Image Features 482 (16.1.1 Example Applications 482; 16.1.2 Encoding Layout with GIST Features 485; 16.1.3 Summarizing Images with Visual Words 487; 16.1.4 The Spatial Pyramid Kernel 489; 16.1.5 Dimension Reduction with Principal Components 493; 16.1.6 Dimension Reduction with Canonical Variates 494; 16.1.7 Example Application: Identifying Explicit Images 498; 16.1.8 Example Application: Classifying Materials 502; 16.1.9 Example Application: Classifying Scenes 502)
16.2 Classifying Images of Single Objects 504 (16.2.1 Image Classification Strategies 505; 16.2.2 Evaluating Image Classification Systems 505; 16.2.3 Fixed Sets of Classes 508; 16.2.4 Large Numbers of Classes 509; 16.2.5 Flowers, Leaves, and Birds: Some Specialized Problems 511)
16.3 Image Classification in Practice 512 (16.3.1 Codes for Image Features 513; 16.3.2 Image Classification Datasets 513; 16.3.3 Dataset Bias 515; 16.3.4 Crowdsourcing Dataset Collection 515)
16.4 Notes 517
17 Detecting Objects in Images 519
17.1 The Sliding Window Method 519 (17.1.1 Face Detection 520; 17.1.2 Detecting Humans 525; 17.1.3 Detecting Boundaries 527)
17.2 Detecting Deformable Objects 530
17.3 The State of the Art of Object Detection 535 (17.3.1 Datasets and Resources 538)
17.4 Notes 539
18 Topics in Object Recognition 540
18.1 What Should Object Recognition Do? 540 (18.1.1 What Should an Object Recognition System Do? 540; 18.1.2 Current Strategies for Object Recognition 542; 18.1.3 What Is Categorization? 542; 18.1.4 Selection: What Should Be Described? 544)
18.2 Feature Questions 544 (18.2.1 Improving Current Image Features 544; 18.2.2 Other Kinds of Image Feature 546)
18.3 Geometric Questions 547
18.4 Semantic Questions 549 (18.4.1 Attributes and the Unfamiliar 550; 18.4.2 Parts, Poselets, and Consistency 551; 18.4.3 Chunks of Meaning 554)

VI Applications and Topics 557
19 Image-Based Modeling and Rendering 559
19.1 Visual Hulls 559 (19.1.1 Main Elements of the Visual Hull Model 561; 19.1.2 Tracing Intersection Curves 563; 19.1.3 Clipping Intersection Curves 566; 19.1.4 Triangulating Cone Strips 567; 19.1.5 Results 568; 19.1.6 Going Further: Carved Visual Hulls 572)
19.2 Patch-Based Multi-View Stereopsis 573 (19.2.1 Main Elements of the PMVS Model 575; 19.2.2 Initial Feature Matching 578; 19.2.3 Expansion 579; 19.2.4 Filtering 580; 19.2.5 Results 581)
19.3 The Light Field 584
19.4 Notes 587
20 Looking at People 590
20.1 HMMs, Dynamic Programming, and Tree-Structured Models 590 (20.1.1 Hidden Markov Models 590; 20.1.2 Inference for an HMM 592; 20.1.3 Fitting an HMM with EM 597; 20.1.4 Tree-Structured Energy Models 600)
20.2 Parsing People in Images 602 (20.2.1 Parsing with Pictorial Structure Models 602; 20.2.2 Estimating the Appearance of Clothing 604)
20.3 Tracking People 606 (20.3.1 Why Human Tracking Is Hard 606; 20.3.2 Kinematic Tracking by Appearance 608; 20.3.3 Kinematic Human Tracking Using Templates 609)
20.4 3D from 2D: Lifting 611 (20.4.1 Reconstruction in an Orthographic View 611; 20.4.2 Exploiting Appearance for Unambiguous Reconstructions 613; 20.4.3 Exploiting Motion for Unambiguous Reconstructions 615)
20.5 Activity Recognition 617 (20.5.1 Background: Human Motion Data 617; 20.5.2 Body Configuration and Activity Recognition 621; 20.5.3 Recognizing Human Activities with Appearance Features 622; 20.5.4 Recognizing Human Activities with Compositional Models 624)
20.6 Resources 624
20.7 Notes 626
21 Image Search and Retrieval 627
21.1 The Application Context 627 (21.1.1 Applications 628; 21.1.2 User Needs 629; 21.1.3 Types of Image Query 630; 21.1.4 What Users Do with Image Collections 631)
21.2 Basic Technologies from Information Retrieval 632 (21.2.1 Word Counts 632; 21.2.2 Smoothing Word Counts 633; 21.2.3 Approximate Nearest Neighbors and Hashing 634; 21.2.4 Ranking Documents 638)
21.3 Images as Documents 639 (21.3.1 Matching Without Quantization 640; 21.3.2 Ranking Image Search Results 641; 21.3.3 Browsing and Layout 643; 21.3.4 Laying Out Images for Browsing 644)
21.4 Predicting Annotations for Pictures 645 (21.4.1 Annotations from Nearby Words 646; 21.4.2 Annotations from the Whole Image 646; 21.4.3 Predicting Correlated Words with Classifiers 648; 21.4.4 Names and Faces 649; 21.4.5 Generating Tags with Segments 651)
21.5 The State of the Art of Word Prediction 654 (21.5.1 Resources 655; 21.5.2 Comparing Methods 655; 21.5.3 Open Problems 656)
21.6 Notes 659

VII Background Material 661
22 Optimization Techniques 663
22.1 Linear Least-Squares Methods 663 (22.1.1 Normal Equations and the Pseudoinverse 664; 22.1.2 Homogeneous Systems and Eigenvalue Problems 665; 22.1.3 Generalized Eigenvalue Problems 666; 22.1.4 An Example: Fitting a Line to Points in a Plane 666; 22.1.5 Singular Value Decomposition 667)
22.2 Nonlinear Least-Squares Methods 669 (22.2.1 Newton's Method: Square Systems of Nonlinear Equations 670; 22.2.2 Newton's Method for Overconstrained Systems 670; 22.2.3 The Gauss-Newton and Levenberg-Marquardt Algorithms 671)
22.3 Sparse Coding and Dictionary Learning 672 (22.3.1 Sparse Coding 672; 22.3.2 Dictionary Learning 673; 22.3.3 Supervised Dictionary Learning 675)
22.4 Min-Cut/Max-Flow Problems and Combinatorial Optimization 675 (22.4.1 Min-Cut Problems 676; 22.4.2 Quadratic Pseudo-Boolean Functions 677; 22.4.3 Generalization to Integer Variables 679)
22.5 Notes 682
Index 684
List of Algorithms 707