
1 Introduction

1.1 Rewriting Floch Today

If the great semiotician Jean-Marie Floch were still alive and could rewrite his famous marketing study on the usage and consumption behaviour of Parisian metro users [1] today, would he use more advanced tools than those available to him forty years ago?

Perhaps he would! i) Floch today would use digital tools to acquire data in the field. ii) He would also assume a theoretical framework resulting from thirty years of evolution of structural theory in the Greimassian tradition. He would perhaps use a kind of semiotics no longer limited to the analysis of the ‘text’: by adopting the perspective of the “semiotics of practices” [2] that he developed, Floch would now have at his disposal a model of the “generative process of the plane of expression” articulated in levels (figure, sign, text, object, practice, strategy, ethos).

i) In the study he conducted in the 1980s, Floch had acquired information on the consumption and usage behaviour of the users of the Paris metro essentially by means of participant (ethnographic) observation supplemented by quick sketches from life; in addition, he had collected quantitative and qualitative data through a systematic interview campaign and the gathering of some narrative accounts.

Today, travellers on the capital’s metro network are almost all equipped with wearable technologies and are therefore easily traceable in the speed and form of their spatial routes, as well as in their consumption choices on web channels; for some of them, it would even be possible to detect the values of biological parameters indicative of part of their emotional states. The data set collected through digital tracking could today be supplemented by interviews or tests in a more traditional format, constituting an immense collection of traces of gaits and trajectories, consumption preferences, narratives, photographic snapshots, etc. Moreover, this huge data set could today be variously visualised and mapped, and synthetically interrogated according to parameters referring to different classes of qualitative factors.

Despite the current development of the means of observation, recording and analysis, it is probable that Floch today would not change the essential structure of his study at all. On-site analysis of behaviour would still be necessary in order to directly compare the conduct of users in the same places and thus highlight different ways of valorising spatial displacement and the use of time. The interest would not lie in forming a collection of “social types” or “psychological types”, but would still lie in constructing an axiology of the different ways of valorising the same place.

ii) We believe that, despite the new theoretical acquisitions of semiotics in the Greimassian tradition, Floch would not at all change the four extreme terms indicating empirically detected behaviour: “explorers, sleepwalkers, professionals and flâneurs”. These four terms derive from the projection onto the semiotic square (Fig. 1) of the semantic category “continuity vs. discontinuity” of the experienced space.

Fig. 1. The taxonomy of spatial enhancement modes used by Jean-Marie Floch in the analysis of Paris metro users’ behaviours. Interpretation made by the authors from the semiotic square of J.-M. Floch [1].

I. [1, 1] The term for the enhancement of “discontinuity” in spatial perception is embodied by the “explorers”: those who enjoy changing perceptual rhythms for cognitive purposes, wanting to identify, compare, correlate and map the places they pass through.

II. [–1, 1] The semantically opposite term is that of the “sleepwalkers”: those who value pure spatial “continuity” and allow themselves to be carried along by the flux of the crowd and – often immersed in reading or listening – anaesthetise themselves in everyday continuity, appreciating the perception of a comfortable regularity and spatial fluidity.

III. [–1, –1] The term contradictory to the adventurous space of the explorer is that of spatial “non-discontinuity”; it is embodied by the “professionals”, i.e. those who consciously minimise the route, fluidly avoid obstacles and use the space of the stations purely instrumentally, as functional equipment.

IV. [1, –1] Finally, the term semantically opposite to the “professionals”, and which denies the space of the “sleepwalkers”, is the one that valorises the pure “non-continuity” of the local space; it is embodied by the figure of the “flâneurs”, understood as those who stroll in search of the unexpected encounter, valorising the incidents of the route and undertaking deviant programmes that enrich the potential of the journey.

In these four terms Floch does not indicate ‘social types’ or psychological profiles, but moments and ways in which subjects grasp the given (morphological and mereological) affordances of a place in their course of action. Obviously, these four extreme cases are only the terms in which classification parameters are maximised and presuppose an infinity of intermediate cases between [–1, –1] and [1, 1]. Moreover, in the course of their experience, each traveller goes through a part of this constellation of states in exploiting different potentialities and virtualities of the same objective situation.

Whoever imputes the different ways of enhancing the same space to the contingent personal inclinations of each user must also admit that these subjective inclinations objectively encounter a different spatial resonance depending on the given situation. Thus, it is the environmental affordance itself that is more or less suited to “explorers, sleepwalkers, professionals or flâneurs”. An entire environment can only lend itself, to a different and definite degree (expressible as a conditional probability), to a given form of spatial enhancement; and only within certain limits can the subject adjust what he feels in, and of, the ambient space.

Assuming that these differences in valorisation possibilities can be measured, we propose to take Floch’s four terms as four spatial types (morphological and mereological), all of which can be felt in the same place, but to an objectively different extent.
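To fix ideas, the following sketch – purely illustrative, and our own assumption rather than part of Floch’s study – encodes the four extreme terms as poles of the square, using the coordinate pairs given above, and assigns to an observed place or trajectory a graded affinity with each pole via a simple inverse-distance weighting:

```python
# Purely illustrative sketch (our own convention, not Floch's): the four extreme
# terms are encoded with the coordinate pairs given in the text, and a point in
# [-1, 1]^2 receives a graded affinity with each pole.

POLES = {
    "explorer":     ( 1.0,  1.0),
    "sleepwalker":  (-1.0,  1.0),
    "professional": (-1.0, -1.0),
    "flaneur":      ( 1.0, -1.0),
}

def affinities(x: float, y: float) -> dict:
    """Normalised affinity of a point (x, y) to each pole: a rough stand-in for
    the 'conditional probability' with which a place lends itself to each mode
    of spatial valorisation."""
    weights = {}
    for name, (px, py) in POLES.items():
        dist = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
        weights[name] = 1.0 / (1.0 + dist)      # closer to a pole -> higher weight
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Example: an intermediate case, closer to the explorer pole than to the others.
print(affinities(0.6, 0.7))
```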

An environmental affordance is a phenomenon emerging from a multitude of factors; to study it through a semiotic approach is to highlight which aspects of an object and a practical scene correlate significantly.

However, the concept of affordance arises in quite different, even opposite, terms from any semiotic approach. “Affordance” is in fact a notion that originated in the psychological studies of James Gibson, whose contribution is very different from the standard cognitive one and – as Costall and Morris [3] have documented – is still widely misunderstood, starting with the most elementary psychology textbooks.

We do not wish to add ourselves to the list of those who try to assimilate Gibson’s dissidence to computational, inferential [4] or even structural-school semiotic cognitive models. However, the importance currently accorded to the notion of affordance in design theories [5] requires some clarification.

1.2 Affordance or Factitiveness? Objects or Environments?

In the field of design studies and theories, the notion of ‘affordance’ formulated by the psychologist James Gibson [6], especially in his last book, The Ecological Approach to Visual Perception [7], has played an important role. ‘Affordance’, according to Gibson, is what our lived body feels of its own possibilities of potentially interacting with objects in the surrounding environment: it is our bodily feeling of potential feasibility in relation to the things and semi-things perceived in the moment; for instance, it is the feeling of being able to ‘walk’, ‘grasp’, ‘embed’, ‘throw’, ‘climb’, ‘fall’, ‘shelter’, ‘sit’, ‘immerse’, ‘ingest’, ‘feed’, ‘warm up’ and any other action that the parts of an environment can (potentially) allow a subject acting in it to perform.

From this followed the idea that the design of an object can be understood as the prefiguration of its affordance. In the field of design studies, however, the notion of ‘affordance’ was initially incorporated [8] in ergonomic terms: it was understood as the (empirically measurable) ability of a physical object, or of a human-machine interface, to make the user perceive the right way to use it, without the user needing to be instructed to do so. In functionalist design theories, the notion of the ‘affordance of objects’ objectively accounts for the ergonomic properties of prostheses and tools: e.g. the seatability of a chair or the habitability of an interior.

The main idea retained from Gibson’s notion is that we perceive our surroundings in a completely unreflective, automatic, synaesthetic, pre-semiotic way, simply by grasping the ‘affordances’ offered to us by the actual surfaces of things immersed in the physico-chemical pregnancies of the atmosphere. However, in its applications to design, the concept of affordance has lost some features of its original meaning; for instance, affordances were admitted [9, 10] that could be real or fictitious, perceivable or non-perceivable, or acquired as expertise by the user. To the ability of objects to suggest actions practicable with them was added the idea that perception can educate itself to grasp new affordances.

Thus the term ‘affordance’ also came to refer to an object’s ability to teach its own use, a concept that has gained increasing popularity especially in digital interface design, contributing to the very notion of ‘usability’ established in standards such as ISO 9241-11 (1998) and ISO 9241-210 (2010). However, conceiving affordance as emerging from an expressive process ends up disavowing its originally non-semiotic or pre-semiotic meaning.

Acknowledging the interactive character of affordance, one understands that, even if a subject believes he is making an everyday object perform in a course of action, he realises that this course of action is bound not only by the object’s operative functionality, but also by its active communicative functionality. It is thus admitted that the user acts on the tool in the same terms in which the tool itself acts on the user, in a series of reciprocal manipulations and counter-manipulations. Thus Jacques Fontanille – as already exemplified by Jean-Marie Floch – proposes to replace the psychological notion of affordance with the semiotic notion of the “factitiveness of objects”, since: «… Ce que l'affordance désigne sans le distinguer, le concept de ‘factitivité’ permet déjà de le décliner au moins en trois types différents et complémentaires: ‘faire-faire’, ‘faire-savoir’, ‘faire-croire’» [2, pp. 37–38] (“What affordance designates without distinguishing it, the concept of ‘factitivity’ already makes it possible to break down into at least three different and complementary types: ‘making-do’, ‘making-know’, ‘making-believe’”).

In other words, affordance, when viewed through the modal theory of Greimassian semiotics [11, pp. 121 and 102–4], translates into ‘factitiveness’, i.e. a typology of the possible reciprocal manipulations between user, objects and environment, concerning both the virtual and potential use of an object and its actualised or realised use in a course of action.

We believe it is not acceptable to equate Gibson’s phenomenological theory either with a kind of ‘imprecise semiotics’ or with an anti-semiotic fanaticism; rather, it should be considered in its more recent version.

It should be made clear that, throughout his last book, Gibson sketches a much more articulated definition of affordances, on at least two essential and often overlooked points.

1°) Gibson clearly distinguishes, on the one hand, the direct perception (pre-intellective, not mediated by any processing) of the physical environment and, on the other, the understanding of objects that bear representations on themselves and that are immersed in the environment, among which he also includes the psycho-perceptual tests that highlight optical-illusion phenomena. This gives rise to automatic environmental affordances in which, however, the perception of specific representational artefacts also comes into play, requiring the unfolding of clearly semiotic cognitive processes.

‘Environmental affordances’ are understood by Gibson as objective phenomenological properties because they are defined as emerging from the encounter of the objectivity of the percipient subject’s lived body with the objective morphology of bodies in the shared environment. However, the distinction of representational parts in the natural environment entails a semiotic process for the subject, i.e. the generation of a plane of expression.

2°) According to Gibson, there is no direct visual perception of space itself; what we directly see is only the spatial deployment of surface textures; that is, we see in 2.5D. He – like Florensky [12] – conceives the spatial content in the consciousness of visual perception as a geometric construct, an abstraction resulting from cognitive processing that exceeds instantaneous perception. Thus, the transition from the direct vision of the surfaces of things to a consciousness of environmental space in its totality of presences does not happen in the same direct, unmediated way.

As an example: we cannot say that we see a painting and the room where it is exhibited in the same way, or the stage area and the stalls of the same theatrical space; there are thresholds and regions of space within which we carry out interpretative processes of an explicitly semiotic type, if not acts of actual coded reading of a merely depicted space.

This does not detract from the evidence that we all feel an overall, holistic and objective feeling of an environment anyway, even if this feeling is amended in the course of experience.

This fact is obviously of great practical importance in interior design and satisfactory answers are often sought from its theory.

When design is understood as the total planning of inhabitable environments, environmental affordances are more relevant than objectual ones, especially – as happened a century ago in the schools of the modernist avant-gardes, from the Vchutemas to the Bauhaus – in the creation of ‘interior environments’ organically configured to exert intense and sometimes radical aesthetic properties.

If the holistic feeling that an environment offers is fundamental, can we only rely on the poetic competence of the creator?

But if what counts above all is the semantic clarity of the parameters taken on by the project, then the priority of a fundamental structure of morphological categories is decisive.

In this second case, the question becomes twofold:

i) Can the concept of environmental affordance account for this holistic, unreflected feeling?

ii) Can the concept of factitiveness semiotically refract the holistic feeling of a place into its signifying components?

2 Experimental Surveys in Artificial Interior Aesthetics

In order to attempt a documentable answer to the questions posed above, we undertook a study of the potential of Floch’s axiology of spatial enhancement, testing it with current digital probing tools, namely platforms trained with deep learning algorithms.

We believe that these tools of artificial aesthetics can provide us with enormous new possibilities of correlation between descriptive parameters – correlations that may prove more or less relevant, fragmentary or doxastic in determining the way in which a set of subjects experiences the feeling of a place.

In the present case, we are testing the analysis of cases and aesthetic categories in interior design using digital tools and prioritising an axiology derived from Floch (e.g. Fig. 2). The current study concerns the conventional genres of interior design and is conducted by means of deep learning tools for the processing of documentary data sets concerning different social domains: healthcare, catering, museography. Here, Floch’s semiotic square constitutes a first map with two orthogonal coordinates that identify the initial value pair of each record; each record is then processed according to subsequent evaluative dimensions, so that it can finally be located as an element of an atlas in continuous stabilisation.
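By way of illustration only – the field names, domain labels and coordinate convention below are our working assumptions, not a finalised schema – such a record and atlas could be sketched as follows:

```python
# Hypothetical sketch of the record structure described above: each document
# (image plus metadata) enters the atlas with an initial coordinate pair taken
# from Floch's square and is then enriched by further evaluative dimensions.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class AtlasRecord:
    source_url: str                       # where the document was found (illustrative)
    domain: str                           # e.g. "healthcare", "catering", "museography"
    square_coords: Tuple[float, float]    # initial value pair on Floch's square
    evaluations: Dict[str, float] = field(default_factory=dict)  # later dimensions

@dataclass
class Atlas:
    records: List[AtlasRecord] = field(default_factory=list)

    def add(self, record: AtlasRecord) -> None:
        self.records.append(record)

    def by_domain(self, domain: str) -> List[AtlasRecord]:
        return [r for r in self.records if r.domain == domain]

# Example: a museum interior provisionally felt as strongly discontinuous.
atlas = Atlas()
atlas.add(AtlasRecord("https://example.org/img1.jpg", "museography", (0.9, 0.2)))
```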

Fig. 2. Illustrative images of spatial enhancement categories in three interior design domains (museography, foodservice and healthcare), selected by a parametric web-search software. The construction of these data sets is the first part of the research programme pursued by the authors.

2.1 Top-Down/Bottom-Up: ‘Aesthetic Categories’ and ‘Image Descriptors’

Initially it is enough for us to accept the fact that “culturally conventional environments” are given and that these (generally aesthetic) categories are ‘ideal objects’, hence, ‘social objects’: one can give lists of terms, labels. These are cultural categories implicit in the distinctions between literary, cinematographic, theatrical and musical genres, especially in the genres of interior design or typical landscapes, up to the thematisations of museography, theatrical scenography and retail design, made explicit in the related marketing studies on commercial spaces.

Without needing to discuss and specify the aesthetic or historical-critical meaning of these terms, we can initially accept them as simple verbal labels that are preferably associated with interior spaces represented through texts, images and videos, contenting ourselves with the simple doxastic and statistical value of these labels.

To this end, software designed to assist interior design by providing sample collections of stylistic classes has emerged in recent years. These tools are capable of processing images and digital documents of any format from immense data sets in order to derive synthetically representable classifications according to parameters referring to different classes of qualitative factors, such as the sensorial categories conventionally attributed to materials, shapes, textures, colours, spatial patterns, potential paths, evocative values, etc.

In more specific cases [13], the result of these doxastic analyses is the production of a series of collections of interior images found on the web, each associated with the statistical distribution of the terms – present in the textual descriptions accompanying the images – that refer to stylistic categories and prototypical atmospheres.
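The tally underlying such doxastic collections can be sketched very simply; the label list and captions below are fictitious stand-ins for the web-scale corpora these tools actually process:

```python
# A minimal sketch of the 'doxastic' tally described above: given images and
# the textual descriptions found alongside them, count how often each
# conventional stylistic label occurs. All names here are illustrative.
from collections import Counter
from typing import Dict, List

STYLE_LABELS = ["romantic", "pop", "casual", "minimal", "industrial"]  # example labels

def label_distribution(captions: List[str]) -> Dict[str, float]:
    """Relative frequency of each stylistic label across the captions."""
    counts = Counter()
    for caption in captions:
        text = caption.lower()
        for label in STYLE_LABELS:
            if label in text:
                counts[label] += 1
    total = sum(counts.values()) or 1
    return {label: counts[label] / total for label in STYLE_LABELS}

# Example usage with two fictitious captions.
captions = ["A romantic candle-lit bistro interior", "Pop colours in a casual diner"]
print(label_distribution(captions))
```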

For the time being, such results are of little critical value, useful at most for listing a few stereotypical classes – “romantic”, “pop”, “casual”, etc. – and calculating their statistical consistency in a given repertoire. However, the development potential of this software, and of the applications that can already be derived from it, is considerable, thanks to the rapid development of calculation systems based on deep learning algorithms applicable to ‘pattern recognition’ and ‘pattern production’ in various expressive formats (visual, acoustic, verbal, dynamic, etc.).

2.2 The True Fakes

First of all, it must be remembered that applications based on deep learning algorithms are nowadays not limited to supervised learning, but can also train themselves by analysing preselected data sets – e.g. a homogeneous corpus of images – either in order to identify the rules that give coherence to the input data set, or in order to measure how far new input data deviate from those rules, or even to deliberately produce new analogous data – e.g. images – that serve as new exempla consistent with the discovered rule.

For example, the artificial production of ‘fake’ graphic, pictorial and musical works has become almost fashionable, especially using software based on Generative Adversarial Network (GAN) algorithms, which extract the statistical weights of image and text descriptors from a given corpus of original exempla and then, through a dual (adversarial) deep learning procedure, define and learn the co-construction rule, producing with it new (fake) exempla of the given set.
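To make the dual (adversarial) procedure concrete, here is a deliberately minimal training loop; it is a generic sketch in PyTorch, trained on random vectors that stand in for descriptor data, and is not a reconstruction of any of the systems cited:

```python
# Schematic GAN training loop (PyTorch), for illustration only: a generator
# learns to produce vectors that a discriminator cannot tell apart from the
# 'real' corpus, here mocked with random data.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # illustrative sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real_corpus = torch.rand(256, data_dim) * 2 - 1  # stand-in for the real exempla

for step in range(200):
    real = real_corpus[torch.randint(0, 256, (32,))]
    fake = generator(torch.randn(32, latent_dim))

    # Discriminator: learn to separate real exempla from generated ones.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to produce exempla the discriminator accepts as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, generator(torch.randn(1, latent_dim)) yields a new 'fake' exemplum.
```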

Furthermore, the possibility of analysing data corpora in composite formats – visual, acoustic, textual or video – allows these applications to construct new categorisations and new taxonomies, without limiting themselves to the mere recognition of a few classes of objects depicted in digital images according to an already given (a priori) classification. They can in fact derive new taxonomies that are only given a posteriori; that is, they can explore taxonomies still in fieri, following the semantic principle of ‘family resemblances’ along a learning process that we can observe, step by step, as the construction of family categories unfolds through the analysis of the immense lexical and iconic databases accessible online.
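As a schematic illustration of such an a posteriori taxonomy – with randomly generated vectors standing in for real image descriptors and an arbitrarily chosen threshold – documents can be grouped purely by mutual similarity, with no class list given in advance:

```python
# Sketch of a taxonomy emerging 'a posteriori': hierarchical clustering of
# feature vectors by similarity alone. Feature extraction is mocked here.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
features = rng.normal(size=(120, 32))   # stand-in for per-image descriptors

# Group items purely by mutual similarity ('family resemblances').
clustering = AgglomerativeClustering(n_clusters=None, distance_threshold=9.0,
                                     linkage="average")
labels = clustering.fit_predict(features)

print("number of families found a posteriori:", labels.max() + 1)
```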

The most creative applications have not been conceived, so far, for descriptive or historical-critical purposes; they were explicitly developed as new production tools for artists. For instance, the DALL-E software [13] is capable of producing new, hybrid yet perfectly coherent visual images from a huge set of lexically labelled source images; these images are produced as iconic responses to questions that the user formulates in simple verbal sentences.

The sense of a historical-critical use of applications of this kind remains to be explored.

After all, the great panoply of software that filters information in the wearable devices filling our daily lives and facilitating our web searches, together with the new artistic research tools generated by the rapid developments in AI, forms an invisible but pervasive artificial aesthetics [14] that is still waiting to be integrated into the aesthetics produced by human reflection.

The essential question for our discourse around the possibilities of objectifying the notion of environment is: do applications based on deep learning also make it possible to move from an approximate, doxastic investigation to a possible morphology of interiors?

2.3 From Subjective Stylistics to Objectifiable Morphometries

We would be wrong to believe that the main purpose of software with GAN-type deep learning algorithms is only to produce plausible hybrids or plausible fake works from corpora of real exempla: fake works such as the countless fictional 19th-century Chinese landscapes composed in 2021 by Alice Xue’s software [15], or the fake portrait of a hypothetical Edmond de Belamy made in 2018 with the algorithm of the French collective Obvious, a work estimated at between USD 7,000 and 10,000 and eventually auctioned for USD 432,500.

Curiously, pattern recognition software had been proposed much earlier for exactly the opposite purpose: to discriminate fake works from original works, especially in situations where their perceptual complexity exceeds human abilities of processing and comparison. One such situation is the problem of deciding on the attribution to Jackson Pollock of a drip painting of dubious or suspicious provenance. The complexity of the calligraphic ductus in the drip-painting technique makes traditional morphological attribution methods in art history ineffective; hence, the use of digital morphometric tools was attempted.

Such an (artificial) solution was suggested in 2015 by the computer scientist Lior Shamir, who adapted ‘pattern recognition’ software originally designed to automate histopathological analyses (to recognise specific morphologies of cancerous tissues) [16]. The original software was designed to be trained on countless images of histopathological slides, but Shamir adapted it to be trained only on the digital images of 26 drip canvases believed to have been executed by Pollock between 1950 and 1955. He asked the system to extract from each of the 26 digital images the numerical values of various descriptor parameters – e.g. the statistical distribution of pixel intensities, colour, position, edges, shapes, regions, fractal order, polynomial decomposition, etc. – that is, values that do not concern a verbalisable semantic level, but only the pure eidetic characteristics of the digital image.

Shamir constructed the rule discriminating between true and false Pollocks by comparing the resulting 26 data sets and i) sorting the descriptors by their resulting importance, ii) selecting the 25% most significant descriptors, iii) writing the rule in the form of Fisher’s linear discriminant.
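The following sketch reproduces the logic of steps i)–iii) with standard scikit-learn components; it is an analogy under our own assumptions (mocked descriptor vectors, an F-score ranking and Fisher’s linear discriminant), not Shamir’s actual implementation:

```python
# Analogous sketch of the pipeline described above: rank descriptors, keep the
# top 25%, and fit Fisher's linear discriminant to separate the two classes.
import numpy as np
from sklearn.feature_selection import SelectPercentile, f_classif
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(26 + 26, 400))   # descriptor vectors per canvas (mocked here)
y = np.array([1] * 26 + [0] * 26)     # 1 = attributed to Pollock, 0 = emulation

model = make_pipeline(
    SelectPercentile(f_classif, percentile=25),   # i)-ii) keep the top 25% of descriptors
    LinearDiscriminantAnalysis(),                 # iii) Fisher's linear discriminant
)
model.fit(X, y)

# A new canvas, described by the same 400 descriptors, can then be classified:
candidate = rng.normal(size=(1, 400))
print("attributed to Pollock" if model.predict(candidate)[0] == 1 else "likely emulation")
```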

This rule was tested by Shamir by subjecting random sequences of images to the software: both original works by Pollock and works by artists emulating his dripping technique. Shamir reports that in 93% of the cases he randomly tested, the software was able to correctly distinguish original works from fake ones. This excellent result highlights three points:

1) that even a painting technique with a high degree of gestural randomness retains individual characteristics;

2) that such individual characteristics, of an eidetic order, can plausibly be identified at a morphometric level, much as one could identify the characteristics of a calligraphic ductus;

3) that the successful scientific use of pattern recognition software for the purpose of morphometric image categorisation always requires two characteristics:

a) a clear initial separation from all semantic considerations, in order to focus solely on the plastic (abstract) and morphometric characteristics of the image;

b) a deep learning processing of the recognition rule made only on ‘a posteriori’ data and based on predominantly frequentist statistics.

Because of these characteristics of scientific correctness, Shamir’s algorithm is not suitable – as GANs are – for producing ex novo (a priori) images of real fake Pollocks. The application is made to measure past facts – measuring them on the basis of a (predominantly ‘frequentist’) retrospective statistic – and not to predict future facts.

We report these findings because they exemplify – by analogy – two aspects of the doxastic study of conventional environments in interior design.

I) It would make no sense to attempt a general classification of conventional environments, even though it is useful to detect provisional and specific local taxonomies, constructed ad hoc, case by case.

II) The absurdity of establishing a typology of conventional environments does not, however, preclude the possibility that a specific morphology can be given.