
Programme

Collaborate - Innovate - Create
Managing Colour in Digital Processes & the Arts

Abstracts:
Below are the abstracts for the conference. More will be added as they become available.

Abstract : Human Visual Perception and Spatial Models of Color
Alessandro Rizzi
Università Degli Studi Di Milano
There is a growing family of algorithms, known as spatial color methods, that process, modify or enhance color information in its visual context. The goals that drive the computation of these models can lead to very different results, and to even more different ways of judging them. In fact, judging their performance is a challenging task and still an open problem.
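
By way of illustration, here is a minimal Python sketch of one well-known member of this family, single-scale Retinex, which derives each output value from the ratio of a pixel to its Gaussian-weighted surround. The surround width and the display rescaling are arbitrary assumptions for the sketch, not recommended settings.

    # Minimal single-scale Retinex: log(pixel) minus log(local surround).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def single_scale_retinex(image, sigma=80.0):
        """image: float array (H, W, 3) with values in (0, 1]; sigma: surround width."""
        eps = 1e-6  # avoids log(0)
        out = np.empty_like(image)
        for c in range(3):  # each channel is processed in its spatial context
            surround = gaussian_filter(image[..., c], sigma=sigma)
            out[..., c] = np.log(image[..., c] + eps) - np.log(surround + eps)
        out -= out.min()               # rescale to [0, 1] purely for display
        out /= out.max() + eps
        return out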

A discussion of the possible goals is presented. Moreover, regardless of the approach, two main variables affect the final result of these algorithms: their parameters and the visual characteristics of the input image. This talk does not deal with parameter tuning, but aims at discussing the visual configurations in which a spatial color method shows interesting or critical behavior.

A survey of the most significant visual configurations will be presented and discussed. The discussion will present the strengths and weaknesses of different algorithms, hopefully allowing a deeper understanding of their behavior and stimulating discussion about finding a common judging ground.

Abstract : Effect of the illuminant/source; colour rendering
János Schanda
University of Pannonia
Two seemingly contradictory, but in reality mutually complementary, phenomena in colour appearance are colour constancy and colour rendering. We usually adapt quite well to large differences in the correlated colour temperature of the illuminating light source, so that white objects are still seen as white as the illumination colour changes (within the limits of “white light”): this is colour constancy. The colour of objects might nevertheless change dramatically if the spectral power distribution of the illumination changes (for example, when an incandescent lamp is replaced by a fluorescent lamp of approximately equal chromaticity). This change is characterized by the colour rendering of the light source.

The CIE has standardized a method to determine a colour rendering index, and has suggested a method to deal with the “colour inconsistency” question. The paper will outline the colour rendering issue and show where the present-day method has its shortcomings. For the industries dealing with coloration, where metameric matches have to be made between object colours produced in different materials, the proper description of colour rendering becomes more and more demanding with the introduction of modern light sources. The visual appearance of coloured scenes illuminated by white LED sources, whether based on a blue LED plus phosphor or on mixing the light of red, green and blue LEDs, might be quite different from that predicted by the colour rendering index. Colour constancy might also break down under these modern light sources.
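
For reference, the final arithmetic of the current CIE method (CIE 13.3) is simple: each test sample i gets a special index R_i = 100 - 4.6 * dE_i, where dE_i is the sample's colour shift between test and reference illuminant computed in the CIE 1964 U*V*W* space after chromatic adaptation, and the general index Ra averages the eight standard samples. A minimal sketch, with the colour-space and adaptation steps omitted and made-up dE values:

    def general_cri(delta_e_uvw):
        """Average the special indices R_i = 100 - 4.6*dE_i over the test samples."""
        special = [100.0 - 4.6 * de for de in delta_e_uvw]
        return sum(special) / len(special)

    # eight illustrative (invented) colour differences for the CIE test samples
    print(general_cri([1.0, 2.5, 3.1, 0.8, 1.9, 2.2, 4.0, 1.5]))  # -> about 90.2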

The paper will show, on the one hand, that one has to go a step further back and use cone-fundamental-based colour matching functions to be able to describe properly the colour appearance of objects illuminated with modern light sources, and will, on the other hand, suggest new methods to describe the colour rendering properties of light sources. These methods try to quantify basic appearance phenomena, such as colour harmony, and we will show that such descriptors can give a good estimation of the colour quality of the source.

Abstract : Models of Color Vision for Image Processing
Sabine Süsstrunk
Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
Many image processing algorithms treat color information only as a three-dimensional extension of grey-scale, and thus neglect the important contribution of color to visual perception. "Smart" color processing can improve the visual results of many algorithms. Using examples such as high dynamic range rendering, color constancy, demosaicing, and dynamic texture synthesis, we show that modelling how the human visual system processes color information can improve the performance of many imaging tasks.
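
As a concrete example of such modelling, the sketch below implements one classic colour-constancy baseline, the gray-world assumption with a von Kries-style diagonal correction. It is a deliberately simple stand-in, not one of the specific models discussed in the talk.

    # Gray-world white balance: assume the average scene reflectance is achromatic.
    import numpy as np

    def gray_world(image):
        """image: float array (H, W, 3) in [0, 1]; returns a white-balanced copy."""
        channel_means = image.reshape(-1, 3).mean(axis=0)
        gains = channel_means.mean() / channel_means  # diagonal (per-channel) correction
        return np.clip(image * gains, 0.0, 1.0)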

Abstract : Light stability of coloured artefacts in museum collections: are we still in the dark ages?
Joyce H Townsend
Senior Conservation Scientist, Tate, and Visiting Professor in Conservation Science, University of the Arts London
with Stephen Hackney, Andrew Lerwill and Jacob Thomas, Tate Conservation Science

email : joyce.townsend@tate.org.uk

Museums and galleries have a duty to preserve historic artefacts, as well as contemporary and ephemeral material which will in the future define our own social, technological and design history. Museums and galleries also aim to provide better public access to their collections, to ever wider audiences. These conflicting ideas lead to pressure to increase display periods, or to use interesting and dramatic lighting in combination with multi-media displays.

Risk assessments of the relative sensitivity of colorants, whether used for images or for printed material, are often not founded on a thorough knowledge of the materials found in artefacts, but are derived instead from knowledge of production processes. Since museum collections may include all natural and man-made materials, this is hardly surprising.

The classical treatment by Thomson in The Museum Environment (Butterworths, first edition 1978) classified coloured artefacts into ‘sensitive’ and ‘non-sensitive’ ones. He defined an illumination level that allows persons with ‘average’ vision to recognise colours, then multiplied this by a ‘fudging’ or safety factor of three; non-sensitive artefacts, he suggested, could be displayed at three times this level of illumination. Preventive conservators and conservation scientists are not nearly so dogmatic about lighting levels today, and risk assessment has moved into the conservation arena as a policy tool, but some questions remain very largely unanswered: how much light can a given object withstand before it changes irreversibly? How much damage is acceptable as the price of access to and appreciation of a collection?
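
To make the first question concrete (the figures here are illustrative, not taken from the lecture): light dose is commonly reckoned in lux-hours, illuminance multiplied by exposure time, so an object displayed at 50 lux for eight hours a day, 300 days a year, accumulates 50 × 8 × 300 = 120,000 lux-hours annually. Halving either the light level or the display period halves the dose, which is the arithmetic behind rotation and restricted-display policies.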

If too many artefacts are classified as ‘sensitive’, due to lack of in-depth knowledge, then conservators may well be accused of ‘crying wolf’ when they try to restrict light exposure. Some modern printing techniques are impressively light-stable, in comparison to pre-modern technologies such as textile dyeing. There is a great need to integrate contemporary research into colour stability of industrially-produced materials - such as the inks used by ink-jet printers - into the consciousness of conservators. There is an equally compelling need to make more objective risk assessments of the lightfastness of existing artefacts, and to re-examine the conditions under which they are displayed and/or stored. This is a focus for current research at Tate into low-oxygen environments, and its implications will be discussed during the lecture.

Abstract : Perception of color and image quality of films (projected in cinema or displayed on a TV screen)
Alain Trémeau
Université Jean Monnet St Etienne
In cinematographic post-production, digital processing of images, known as the Digital Intermediate (DI), is increasingly replacing the traditional film workflow. Digital post-production requires previewing DIs with a reproduction of color, dynamics and resolution comparable to the final film projection. In DI, the film timing operations are replaced by digital color correction. During color correction, high quality CRT displays and digital projection replace the film; however, final results in color and dynamics are always assessed via a film print. To ensure film-like color rendering on a target display other than a film projector, a color transformation based on a 3D look-up table (LUT) needs to be used. This LUT is calculated from color measurements on film and on the target display.

This presentation analyses current post-production workflows with respect to their color management needs. We discuss the principal approach to obtaining a LUT under practical constraints, describe how to enhance the quality of film measurements and reduce their execution time, and explain how to conduct subjective tests that ensure the quality of the digital film look.

Three subjective tests will be described. A first test, the Double-Stimulus Continuous Relative Quality Scale method (DSCRQS), derived from the ITU-R BT.500-10 DSCQS test method, allows non-biased, temporal digital-versus-film comparison. A second test, derived from ITU’s DSIS and SDSCE tests, allows more sensitive side-by-side comparison, but is biased. Finally, a third test will be introduced as a free in-depth side-by-side comparison to collect experts’ comments. The tests are based on a number of principles, such as the use of real film content, consideration of use cases and limitation of bias. A first run of a test of the second type, which has been successfully applied to measure the effect of changing a film projector’s bulb in a color correction theatre, will be presented.
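
To illustrate the transform itself, here is a minimal sketch of applying a 3D LUT to a single RGB triple by trilinear interpolation. The LUT below is a placeholder identity table used as a sanity check; a production LUT would be populated from the film and display measurements described above.

    import numpy as np

    def apply_3d_lut(rgb, lut):
        """rgb: three values in [0, 1]; lut: array of shape (N, N, N, 3)."""
        n = lut.shape[0]
        pos = np.asarray(rgb) * (n - 1)     # position in LUT grid coordinates
        i0 = np.minimum(np.floor(pos).astype(int), n - 2)
        f = pos - i0                        # fractional offsets within the cell
        out = np.zeros(3)
        for dr in (0, 1):                   # blend the 8 surrounding lattice points
            for dg in (0, 1):
                for db in (0, 1):
                    w = ((f[0] if dr else 1 - f[0]) *
                         (f[1] if dg else 1 - f[1]) *
                         (f[2] if db else 1 - f[2]))
                    out += w * lut[i0[0] + dr, i0[1] + dg, i0[2] + db]
        return out

    # Identity LUT: output should equal input.
    g = np.linspace(0.0, 1.0, 17)
    identity = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
    print(apply_3d_lut([0.2, 0.5, 0.8], identity))  # -> [0.2, 0.5, 0.8]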

Another problem in cinematographic post-production is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires, firstly, the definition of a reference color space and of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. Effective gamut mapping algorithms are therefore used to maintain the best rendering of colors within the device gamut, whilst applying a “soft clipping” to colors outside the gamut. This also requires the use of production metadata in film processing, real-time processing and adaptation to the medium. The author is assumed to specify viewing conditions, as is done in the digital graphic arts. When the content is reconstructed and displayed on a specific device, such visual (even psycho-visual) specifications have to be taken into account. To control image pre-processing and post-processing, these specifications should be contained in the film’s metadata. The specifications are related to ICC profiles but need additionally to consider mesopic viewing conditions.
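
As a one-dimensional illustration of “soft clipping”, the sketch below leaves values under an assumed knee point untouched and compresses anything above it smoothly toward the channel maximum instead of clipping hard. Real gamut mapping operates on three-dimensional gamuts in a perceptual colour space; this shows only the basic idea.

    import numpy as np

    def soft_clip(x, knee=0.9):
        """Compress values above `knee` so they approach 1.0 asymptotically."""
        x = np.asarray(x, dtype=float)
        out = x.copy()
        span = 1.0 - knee
        over = x > knee
        # map (knee, infinity) smoothly onto (knee, 1.0); slope is continuous at the knee
        out[over] = knee + span * (1.0 - np.exp(-(x[over] - knee) / span))
        return out

    print(soft_clip([0.5, 0.95, 1.2]))  # 0.5 passes through; 0.95 -> ~0.94; 1.2 -> ~0.995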


Abstract : The Many Mispellings of Fuchsia
Nathan Moroney
Hewlett Packard, USA
This course will review some general and some specific results of a multi-year, multi-lingual online color naming experiment. Having collected over 30,000 color names in English, it is possible to investigate specific aspects of color naming, such as trends in non-basic color naming or the relative accuracy of color names based on objects. Applications include the design of online participatory experiments, color visualization and selection tools, and more. This course will provide general background on current trends in color naming research and then focus primarily on how the world wide web has allowed the topic to be explored in a manner not previously considered. One specific application of the experimental data, the derivation and design of an online color thesaurus, will be presented as an example of current and possible future directions of color naming research.
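
As a toy illustration of the lookup behind such a thesaurus, the sketch below maps an RGB value to the nearest of a few named colors. The names and coordinates are placeholders rather than data from the experiment, and a real system would measure distance in a perceptual space such as CIELAB rather than in RGB.

    # Hypothetical mini-lexicon; a real one would hold thousands of collected names.
    NAMED_COLORS = {
        "red":     (255, 0, 0),
        "green":   (0, 128, 0),
        "blue":    (0, 0, 255),
        "fuchsia": (255, 0, 255),
        "grey":    (128, 128, 128),
    }

    def nearest_name(rgb):
        """Return the name whose coordinate is closest in squared Euclidean distance."""
        return min(NAMED_COLORS,
                   key=lambda name: sum((a - b) ** 2
                                        for a, b in zip(NAMED_COLORS[name], rgb)))

    print(nearest_name((250, 20, 240)))  # -> "fuchsia"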

Abstract : When computers look at art: Rigorous analysis of color, lighting and form in master paintings
David G. Stork
Chief Scientist, Ricoh Innovations
Lecturer, Stanford University

The growth of high-resolution digital color imaging of master paintings and the expansion of techniques from rigorous computer vision and image analysis together provide the foundation for a new academic discipline: computer image analysis of fine art. This new discipline goes beyond the mere processing of images for improved presentation to include analysis for answering art historical questions. As such, this field complements and extends traditional art history to shed new light on problems in the humanities that have vexed art historians, curators and conservators.

Early successes of this new discipline include: the processing of color and multi-spectral images to reveal hidden works such as the Archimedes palimpsest; computer modeling of the discoloration of varnishes in Renaissance paintings to predict and render the effects of conservation treatment; computer modeling of the fading of fugitive dyes and then "reverse aging" the color in digital images to recover the original color schemes in Medieval tapestries; applying sophisticated shape-from-shading algorithms to estimate the location, number and color of illuminants in paintings by Georges de la Tour and Caravaggio to reveal the working methods of these artists; multi-scale wavelet processing of brushstrokes in a group portrait by Perugino to reveal the number of contributing assistants; rigorous computer estimation of perspective transforms in paintings by Jan van Eyck for quantifying perspective and form anomalies to understand the working methods of this artist; Chamfer-metric-based quantification of fidelity in works by Jan van Eyck to determine whether he used drawing aids; computer single-image metrology to reconstruct the virtual or fictive three-dimensional spaces depicted in realist paintings and murals such as those by Masaccio and Piero della Francesca; construction of computer graphics models of tableaus within paintings to explore "what if" scenarios in lighting and composition to understand the artistic decisions of painters such as Jan Vermeer; image analysis and modeling of the degradation of printed strokes to date Renaissance printed books and etchings; digital fractal analysis and pattern classification of drip paintings to determine whether they were executed by Jackson Pollock or are instead forgeries; and more.

Digital analysis of collections of paintings by a single artist or group of similar artists can provide a new foundation for stylometry, the quantification of artistic style, that complements traditional art historical methods. Progress in this new discipline will be driven by the extent to which we can integrate the rigor of scientists with the connoisseurship of humanists.

This talk will conclude with a number of open research questions and suggestions for cross-disciplinary collaborations among imaging professionals, computer scientists and humanistic scholars of the visual arts.

Joint work with Antonio Criminisi, Marco Duarte, M. Kimo Johnson, James Schoenberg, and Christopher W. Tyler

Abstract : The Interaction of Art, Technology and Consumers in Picture Making
keynote speaker : John McCann
McCann Imaging
Pictures can be paintings, silver-halide photographs, digital photographs, video, or phone displays. They are the result of both technology and consumers’ use of images. We know of pictures made as long ago as 14,000 BC in the Lascaux Cave paintings. Most images up to the thirteenth century were narratives of objects of interest, without much thought devoted to scene reproduction. With the adoption of Brunelleschi’s geometrical perspective in 1400 AD, pictures added more realistic shapes and sizes to the rest of the scene around these objects. With the addition of chiaroscuro, around 1500 AD, the illumination became as important as the objects. As well, chiaroscuro introduced rendering high-dynamic-range scenes in low-dynamic-range media. The advances in Renaissance painting were sponsored by patrons of art, such as the Medicis.

The advancement of photographic technology since Fox-Talbot (1835) has been remarkable. Photography has reinvented itself every generation over the last 170 years, and it continues to do so. With the increase in the use of photography in the last half of the 19th century, photography moved into the painter’s space of scene reproduction, and painters moved elsewhere. However, technology provides only a fraction of the story that has controlled the recent history of pictures. The artist-industrialist J.C. LeBlon invented commercial color printing around 1700, and hand-painted color photographs were common in 1861 when James Clerk Maxwell made the first photographic color separation images using color filters. Color printing preceded its photomechanical implementation by one and one-half centuries. The legacy of making color photographs includes many ingenious cameras, subtractive dye-transfer systems, and additive color films. It was not until the 1940s that the subtractive dye-coupler process became the universal color capture technique. With this highly sophisticated, multilayered film came a universally used, low-cost, high-volume product, with multi-national corporations as film suppliers replacing amateur and small business sources.

The digital picture added computation, instant global transmission, and long-term storage problems. Computation allows image processing for high-dynamic-range scenes, as well as aesthetic modification. Image processing algorithms possible with digital images can perform tasks that artists used to do. In addition, digital manipulation has become a principal tool of artists. Instant transmission over the internet changed both accessibility and the standards of information content. In 1990 the universally accepted standard for a photograph was the information captured in a 35mm negative, or about 9 megabytes. Today, the typical web image is a JPEG of less than 0.1 megabytes. Daguerreotypes made in the 1850s are as vibrant today as when they were made. Digital images, however, are easily lost: the media are easy to use but inherently unstable, and storage devices become obsolete. The majority of cell phone images have only the lifetime of a telephone call.

The artist, the patron, the dedicated amateur hobbyist, the commercial service provider, and recently multi-national corporations are all image makers. Technology has changed who makes pictures, and who uses them. It is the combination of technology and social use that controls the lifetime of an imaging technology, as well as the lifetime of individual images. This talk will describe the details of many advances in imaging technology and how they were replaced by further advances, or by consumer preferences.

Abstract : The assessment of image difference and quality
Stephen Westland
School of Design, University of Leeds, UK
This talk will focus on two issues that are becoming increasingly important in the imaging world: the assessment of image difference and the assessment of image quality. Psychophysical methods for the evaluation of these two important properties will be discussed, and some possible computational approaches for their calculation will be described. Whereas colour-difference metrics have been used extensively for spatially uniform samples for many decades, the assessment of colour differences in images poses some difficult challenges. Some of the proposed solutions will be explored. Image quality is important for the optimization of processing and rendering in applications such as mobile phones and televisions. Some computational approaches for the assessment of image quality will be described, including some new results from the Colour Imaging group at Leeds University.
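
For orientation, the simplest pointwise measure is sketched below: the mean per-pixel CIELAB colour difference (the 1976 Delta E*ab). Its indifference to spatial context is precisely why image difference is harder than the uniform-patch case it is contrasted with above.

    import numpy as np

    def mean_delta_e(lab1, lab2):
        """lab1, lab2: float arrays (H, W, 3) already converted to CIELAB."""
        diff = lab1 - lab2
        delta_e = np.sqrt((diff ** 2).sum(axis=-1))  # per-pixel Euclidean distance
        return float(delta_e.mean())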

Abstract : The practical realities and daily disappointments of colour!
Michael Craine & Angela Brown
Cranfield Colours Ltd. Printing Ink Manufacturers, Cwmbran, Gwent
Speaking from experience, ink manufacturers Michael Craine and Angela Brown of Cranfield Colours Printing Inks explore and explain some of the frustrations, limitations and misunderstandings encountered almost on a daily basis when attempting to reproduce colours on a range of substrates.

Cranfield Colours have been producing lithographic printing inks for three generations, and as well as supplying standard ink products, the company enjoys a strong reputation as the source of corporate and special colours for printers in the UK and overseas. But what happens when a graphic designer changes the substrate, or the printed job is to be laminated or UV varnished? Why was the printed job accepted at the end of the printing press, but later rejected in the client’s boardroom?

Practitioners in the ink industry (regrettably shrinking in the UK) seek to bridge the gap between the clinical accuracy of the pigment industry and the black magic and witchcraft of print, reaching commercial compromises that will satisfy graphic designer, end-client and printer alike!

The main purpose of their presentation will be to show the limitations in the industrial and commercial context of colour control and reproduction. The presentation will:

1 examine the standard terminology, both correct and colloquial, used to describe colour;

2 using examples, demonstrate occasions where expectations have been met through compromise, and regrettable situations when a solution has not been found to the satisfaction of the client;

3 seek to generate sympathy amongst delegates… by explaining the growing constraints on the ink industry from commercial pressures, modern press design & configuration, and paper and board quality!

