Any of these systems human computer human

2018-10-29

Any of these systems – human, computer, human–human, human–computer and so on – is defined by an imaginary boundary projected by an observer. This imaginary boundary sets the system's interior apart from its exterior. If a human considers herself or himself as a system, then making (the interior self affecting the exterior other) and learning (the exterior other affecting the interior self) constitute instances of outputs and inputs crossing boundaries.

While cyclical relationships such as the ones observable in human–computer interaction are commonly dissected and broken up into pieces, it is uncommon to turn systems back on themselves to form closed loops. This is because modern culture appreciates systems which allow description in terms of linearly-causal logic and which offer predictable control in terms of defined states. Closed-loop structures tend to be appreciated only where they facilitate control, typically in the form of negative feedback and error correction or of stable oscillation. Unpredictable fluctuations and out-of-control patterns tend to be unwelcome outside of artistic and experimental domains. They are rarely the subject of formal analysis, and attempts at their formal analysis are hampered by the linear nature of common tools of description.

Nonetheless, the (designing) human mind must be acknowledged not merely as a static stimulus-response system, a static translator between inputs and outputs, but as a system whose input channels are subjected to its own output. Contrary to the technologies it currently tends to develop, the human mind is subjected to what it itself produces and is thus changed by its own performance (see Figure 1). As stated above, design, being at least in part out-of-control (Glanville, 2000), involves not only linear but also circular causality – between design team members, between designers and their sketches etc. (Fischer, 2010). Common algorithmic devices for generative, computer-based design likewise involve circular feedback, such as the potentially circularly-causal relationship between any two cells in a cellular automata system, or the self-referential relationships in L-systems, in evolutionary algorithms and so on.

Input–output operations can leave traces inside of (designing) systems equipped with suitable “internal state” memory. Such systems can therefore, in effect, become different systems through each of their operations. And through the interaction of input and/or output with given internal states, such machines can behave unpredictably. Systems of this kind will be explored and illustrated in the following, with special attention to the limits of purely mechanical or digital implementations in the design context.

The loop which is formed when human articulations feed back as inputs to the human creative process allows expressions of the mind to re-enter the mind, where they may leave increasingly stable traces (Glanville, 1997, p. 2), i.e. memory. This view was substantiated by von Foerster's (1950) interpretation of an earlier study of human memory. In that study, subjects had been asked to memorise random, meaningless syllables and then to recount as many of them as they could at regular intervals. Memory and progressive forgetting were shown to follow an exponential decay curve which approached not zero but a number of syllables greater than zero – syllables that the subjects were increasingly likely to remember permanently.
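The decay described here can be sketched, purely for illustration, as exponential relaxation towards a non-zero asymptote. The notation below (R_0, R_infinity, tau) is an assumption for this sketch, not taken from von Foerster's study:

    R(t) = R_\infty + (R_0 - R_\infty)\, e^{-t/\tau}, \qquad R_\infty > 0

where R(t) is the number of syllables recalled at time t, R_0 the number initially memorised, R_\infty the permanently retained remainder towards which recall converges, and \tau a forgetting time constant.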
Von Foerster explains this by pointing out that the human is capable of both input (listening) and output (speaking), and hence of circular closure and re-entry of articulations. Every recounting of a remembered syllable (output) is thus also a new input which reinforces what is known. Repeated recalling therefore leads to an eventually stable subset of remembered syllables.

Von Foerster (2003, p. 311) illustrated processes of this nature using his notion of the trivial machine (TM), which he juxtaposed with his notion of the non-trivial machine (NTM). Somewhat comparable to Turing's (1937, p. 231ff.) proposal of the Turing Machine, von Foerster describes both the TM and the NTM as minimal hypothetical machines, intended not for implementation but for illustrating ideas. He describes both TM and NTM as basic input–output (stimulus-response) systems, each being a mechanism connected to an input channel and an output channel. The TM predictably translates inputs into corresponding outputs, so that an external observer can, after a period of observation, establish clear causal relationships between possible inputs and resulting outputs, for example in the form of a “truth table” as shown on the left of Figure 2. A complete truth table is a reliable model for predicting the TM's output responses to given inputs, irrespective of how long the machine has been in operation.

In contrast, the NTM contains means to memorise a machine state (labelled z on the right of Figure 2). This state co-determines the machine's output together with its input. At the same time, the state may change with each input–output operation. This results in a vast number of possible input–output mappings, which can easily exceed the quantitative limits of what an external observer can determine analytically, i.e. derive predictive capabilities from (Glanville, 2003, p. 99). The NTM's history of input–output translations can be said to leave traces in the machine, which in effect turns into a different machine through and for each of its own operations. An outside observer cannot easily establish a reliable truth table by which outputs resulting from given inputs can be predicted.
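The TM/NTM distinction can be made concrete with a minimal sketch in code. The Python classes below are an illustrative assumption rather than von Foerster's own formalism: the TM is a fixed lookup table, while the NTM threads an internal state z through every operation, so that identical inputs may yield different outputs over time.

    class TrivialMachine:
        """Fixed input-output mapping; fully described by its truth table."""
        def __init__(self, table):
            self.table = dict(table)

        def step(self, x):
            return self.table[x]


    class NonTrivialMachine:
        """Output depends on the input and an internal state z; z may
        change with every input-output operation."""
        def __init__(self, output_fn, state_fn, z):
            self.output_fn = output_fn  # (x, z) -> y
            self.state_fn = state_fn    # (x, z) -> next z
            self.z = z

        def step(self, x):
            y = self.output_fn(x, self.z)
            self.z = self.state_fn(x, self.z)  # the operation leaves a trace
            return y


    # The TM always answers the same way; the NTM in effect becomes a
    # different machine through each of its own operations.
    tm = TrivialMachine({0: 1, 1: 0})
    ntm = NonTrivialMachine(lambda x, z: x ^ z, lambda x, z: x, z=0)

    print([tm.step(x) for x in (1, 1, 1)])   # [0, 0, 0] -- predictable
    print([ntm.step(x) for x in (1, 1, 1)])  # [1, 0, 0] -- history-dependent

Whereas the TM's truth table can be recovered by observation alone, the NTM's input–output mapping shifts with z, so an observer who sees only inputs and outputs cannot establish a stable truth table for it.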