Despite significant growth in recent years, the availability of 3D content is still dwarfed by that of its 2D counterpart. To close this gap, many 2D-to-3D image and video conversion methods have been proposed. Methods involving human operators have been most successful, but they are also time-consuming and costly. Automatic methods, which typically assume a deterministic 3D scene model, have not yet achieved the same level of quality, as they rely on assumptions that are often violated in practice. In this paper, we propose a new class of methods based on the radically different approach of learning the 2D-to-3D conversion from examples. We develop two types of methods. The first is based on learning a point mapping from local image/video attributes, such as color, spatial position, and, in the case of video, motion at each pixel, to scene depth at that pixel, using a regression-type idea. The second is based on globally estimating the entire depth map of a query image directly from a repository of 3D images (image+depth pairs or stereopairs) using a nearest-neighbor-regression-type idea. We demonstrate both the efficacy and the computational efficiency of our methods on numerous 2D images, and we analyze their benefits and drawbacks. Although far from perfect, our results demonstrate that repositories of 3D content can be used for effective 2D-to-3D image conversion. An extension to video is immediate by enforcing temporal continuity of the computed depth maps.
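The global, nearest-neighbor-based idea can be illustrated with a toy sketch: describe each repository image by a coarse global descriptor, find the query's k nearest neighbors in descriptor space, and fuse their depth maps. The descriptor, distance, and median fusion below are simplifying assumptions for illustration only; a real system would use richer global features and post-filter the fused depth.

```python
import numpy as np

def global_descriptor(img, grid=(8, 8)):
    """Coarse global descriptor: block-average the image over a small
    grid and flatten. A stand-in for richer global features."""
    h, w = img.shape
    gh, gw = grid
    # Average pooling over a gh x gw grid (assumes h, w divisible by gh, gw).
    return img.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3)).ravel()

def knn_depth(query, images, depths, k=3):
    """Estimate a depth map for `query` by median-fusing the depth maps
    of its k nearest neighbors in descriptor space."""
    q = global_descriptor(query)
    dists = [np.linalg.norm(global_descriptor(im) - q) for im in images]
    nearest = np.argsort(dists)[:k]
    return np.median(np.stack([depths[i] for i in nearest]), axis=0)
```

The local, regression-type method would instead predict depth per pixel from features such as color, spatial position, and motion; the same repository supplies its training pairs.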