Saturday, 15 October 2016

Repetition and the Apophatic in Music: An Information Theoretical Approach

All analysis involves the application of constraint on experience. “Constraint” itself can be variously defined as background, absence or context – it is the domain of the “not there”. In Information Theory, it is measurable as Shannon’s ‘redundancy’, the inverse of information. Recent scholarship in Information Theory has borrowed a term from theology, labelling the broader domain of “not information” (beyond Shannon) as Apophatic.
At the heart of the Shannon notion of redundancy are concepts of repetition, similarity, identity and analogy. Since Hume, the identification of similarity and analogy has introduced questions about human reasoning which cast doubt on assumptions about expectation, induction, causation and probability. Hume famously considered the likeness of eggs, but the likeness of melodies, themes, harmonic patterns, rhythms and so forth is, I argue, more compelling because it carries with it the visceral dimension that is shared between musicians and intrigues analysts.
In considering Bach’s fugue in Ab major, I start from the perspective of an early champion of Shannon’s work, the cybernetician and psychiatrist Ross Ashby. Ashby argued that:
“The principle of analogy is founded upon the assumption that a degree of likeness between two objects in respect of their known qualities is some reason for expecting a degree of likeness between them in respect of their unknown qualities also, and that the probability with which unascertained similarities are to be expected depends upon the amount of likeness already known.”

The Bach fugue presents a variety of descriptions of different aspects of the music, where each description considers the counting and distribution of particular features (rhythm, melody, intervals, etc). For example, how is (a) the same as (b)? Each description exists in the context of other descriptions; each description constrains other descriptions (e.g. the description of rhythm is constrained by the description of dynamics or pitch). Moreover, the identification of similarity within each description entails assumptions about the degree of likeness in “unknown qualities”. As the music unfolds, some of these assumptions about unknown qualities will be revealed to be errors, causing continuous reassessment of what counts as the same and what doesn’t.
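A minimal sketch of how such a description might be counted, in Python. The note and rhythm sequences below are hypothetical placeholders (not a transcription of the Ab major fugue), and the alphabet sizes are assumed 'specifications' of each description:

```python
from collections import Counter
from math import log2

def entropy(symbols):
    """Shannon entropy (bits per symbol) of an observed sequence."""
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in Counter(symbols).values())

def redundancy(symbols, alphabet_size):
    """Shannon redundancy: 1 - H/Hmax, with Hmax = log2(alphabet size)."""
    return 1 - entropy(symbols) / log2(alphabet_size)

# Two 'descriptions' of the same hypothetical bars: pitch classes and rhythms.
pitches = ["Ab", "Bb", "C", "Ab", "Eb", "C", "Ab", "Bb"]
rhythms = ["q", "q", "e", "e", "q", "e", "e", "q"]  # q=quarter, e=eighth

print(redundancy(pitches, alphabet_size=12))  # against 12 pitch classes
print(redundancy(rhythms, alphabet_size=4))   # against 4 assumed durations
```

Each description yields its own redundancy figure; the harder question raised above is how the descriptions constrain one another.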

Friday, 14 October 2016

Scientific Communication, Information and Music

In discussing the problems of scientific communication and the pathologies of education, there are three fundamental distinctions which are important to draw. They are:

  1. The distinction between IS and OUGHT in arguments about scientific communication 
  2. The distinction between an EXPLANATION and a DESCRIPTION 
  3. Issues about ONTOLOGY and INFORMATION 

I want to discuss each of these in turn, and then to draw on a musical example to illustrate the issues further.


I have begun to see the pathologies that we have in education and publishing as a direct consequence of failures in scientific communication. The challenge is to describe the ontological mechanisms. Essentially I aim to describe how scientific communication should be conducted in the light of what we know about our science. I do not want to say how it 'ought' to be.

Hume's famous passage in dealing with the dichotomy of "is" and "ought" is worth reflecting on:
"In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, 'tis necessary that it should be observed and explained; and at the same time that a reason should be given; for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it." 

His complaint is about slippage from "is" to "ought" (he does not deny the possibility of deriving an ought from an is - the logical positivists misrepresented him). In my argument about scientific publishing I have tried to be careful to avoid 'oughts' and to ground an argument for a richer embrace of technological expression on the basis of describing how today's science is. I'm arguing (not much differently from David Bohm, whose work on communication is new to me) that the nature of the science entails the need for new practices of communication.

There is a critical dimension (which I don't think is an Ought - it's just a warning): if we continue to communicate in the way that we did in the 17th century, then our communication won't work because it works against the scientific ontology. I'm speculating that this pathology feeds into financialisation processes which produce social crisis. In Hume's argument, communication between scientists and an ontology of regularity were tied together; now that we have to admit multiple contingencies in our scientific practices, the communication cannot remain unchanged - can it?


Universal explanation is a common trait of scientific endeavour. This is clearly a very deep issue, but it fundamentally concerns our conception of causation. What is causation? What is causal explanation? For Hume, causal explanations are constructs produced in discourse (i.e. communication) between scientists in the light of regular successions of events produced in experiments. However, it is also worth considering that Hume was deeply sceptical about the articulation of any rational foundation which could underpin the production of regularities in nature. That cast doubt on assumptions about inductive reasoning (and for anyone who would champion Peirce's 'abduction', I think it suffers from the same problem at a different level).

Scientists certainly produce totalising explanations, cosmologies, etc, and these can be very useful in organising discourse and scientific activity, and in creating a sense of hubristic excitement which moves things on. But whilst universalist claims will be made, all we can safely say is that each is a "description of understanding". Scientific communication occurs when different scientists' "descriptions of understanding" coincide. I prefer to think of this as a recognition between scientists that they operate within related or shared constraints. We should inquire into the conditions under which this happens. To describe phenomena, and one's understanding of phenomena, is to reveal one's constraints. Describing doubt is a very important part of this. To explain is to attempt to remove doubt - not just the explainer's, but that of those they wish to convince.


Loet spotted a constraint in my understanding about redundancy and made an intervention which has (this time - sorry for not getting it until now!) really clarified things, and also opened up a connection between ontology, information and redundancy. Essentially, to calculate the redundancy one must have the maximum entropy, and the maximum entropy can only be gained from what Loet calls the "specification of the system": that, in my understanding, is an agreed ontology of what the system IS. 

I think this makes the relationship between Shannon information and redundancy recursive. In order to agree the ontology of the system, one must communicate; in order to communicate, we must agree the constraints; in order to apprehend constraints, we must identify the redundancy... which can be identified through the maximum entropy, which entails agreeing an ontology. And so on. This makes me think my intuition about the importance of Lou Kauffman's work isn't wide of the mark.

Information appears like a recursive version of Wittgenstein's duck-rabbit, where there is a smaller duck-rabbit inside the larger duck-rabbit.

Of course, it is impractical to go to these recursive depths. Shannon's equations constrain us to a simple empirically observable domain. But I think it is important to recognise that the recursion is there, and that we are effectively 'cutting into it' (or constraining it). It may be that the point hangs on the identification of analogy, or identity: of what is counted as "the same as" or "another one".
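A small sketch of how this 'cut' works in practice, assuming an arbitrary symbol sequence: the observations are fixed, but the redundancy changes with the agreed 'specification of the system' (the alphabet over which we take the maximum entropy):

```python
from collections import Counter
from math import log2

def entropy(seq):
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in Counter(seq).values())

def redundancy(seq, alphabet_size):
    # Hmax = log2(alphabet size): it depends on the agreed ontology,
    # not on the observations themselves.
    return 1 - entropy(seq) / log2(alphabet_size)

seq = list("ABABABCB")
# Same observations, two 'ontologies': a 3-symbol system vs an 8-symbol one.
print(redundancy(seq, alphabet_size=3))
print(redundancy(seq, alphabet_size=8))
```

The second figure is much higher: declaring a larger possible alphabet makes the same observed sequence look more constrained.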


I'm preparing a video to explore this which uses a musical example. I'll try and explain in text what I want the video to explain (you will at least have two descriptions!): Music analysts identify those features in a score or some other record of performance which are "the same" and "another" and produce their analyses which show how different combinations of categories change over time. But when we listen to a piece of music for the first time, we know little of what is about to come, except that our expectations are shaped as the music unfolds. What emerges over time is a multiplicity of what might be called "descriptions" (although they need not be verbalised, they can be expressed analytically to some extent). These concern many different dimensions of what we hear, including:

  • the rhythmic patterns 
  • melodic patterns 
  • timbral patterns 
  • dynamics (loud and soft) 
  • phrasing 
  • pitch 
  • intervals... and so forth. 

Each description exists within constraints which are partly produced by the other descriptions, and by other factors (like, for example, one's familiarity with the style). As the music unfolds, new descriptions (about form, climactic moments, harmonic progressions, etc) emerge, whose constraints will interact with (and transform) existing constraints - even (most powerfully in music) our emotional constraints.

I mention music because it is a form of communication which is extremely powerful and which does not make any external reference. It tells us something about how we communicate, but there is an analytical puzzle here. The specification of the system is beyond reach, yet we sense the patterns, the repetition, the redundancy without having a sophisticated way of calculating it. We also find that what we might consider to be "the same" at one moment in one context, we might later count as fundamentally "different" in another (e.g. perhaps the same melody with a different harmony). Moreover, I suggest that these moments of seeing something to be different which we once thought to be "the same" are moments of gaining deeper insight into the meaning being conveyed. My deepened understanding of the relationship between redundancy and the "specification of the system" explained by Loet is an example.
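The shift in what counts as "the same" can be made concrete by measuring one sequence of notes under two identity criteria. The MIDI numbers below are a hypothetical example, not drawn from any particular piece:

```python
from collections import Counter
from math import log2

def entropy(seq):
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in Counter(seq).values())

notes = [60, 62, 64, 62, 60, 62, 64, 65]  # hypothetical MIDI pitches

# Identity criterion 1: notes are 'the same' only if the pitch is identical.
as_pitches = notes
# Identity criterion 2: only the contour matters (up/down/same).
as_contour = ["up" if b > a else "down" if b < a else "same"
              for a, b in zip(notes, notes[1:])]

print(entropy(as_pitches))  # finer distinctions, higher entropy
print(entropy(as_contour))  # coarser identity, lower entropy
```

Changing the equivalence relation changes the measure: redefining "the same" literally changes how much information (and redundancy) we find.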

This, it seems to me, is the essence of what happens when we really communicate. The process, I suggest, is an emergent interaction of constraints. It requires multiple descriptions. As long as we attempt to convey singular descriptions in academic papers alone, communication in this sense is going to be very difficult - if not impossible.

Friday, 30 September 2016

E-learning and Scientific Communication

With the technologies that we have today, it is possible to communicate on a large scale in a much richer way than has ever been available to us before. Fundamentally, the power of our new media has to do with its potential variety of expression, or (more technically) its maximum entropy (the maximum possible surprisingness which can manifest itself through the medium). Text - particularly the text of academic papers - has a much lower maximum entropy.
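The claim about maximum entropy can be roughed out numerically. The figures below are illustrative assumptions (a ~96-symbol character set for text; 24-bit colour and a 640x480 frame for video), not measurements of any actual medium:

```python
from math import log2

# Maximum entropy = log2(number of distinguishable states per symbol).
text_states = 96                # assumed printable character set
pixel_states = 256 ** 3         # assumed 24-bit colour per pixel
frame_pixels = 640 * 480        # assumed modest video frame

h_text = log2(text_states)                   # bits per character
h_frame = frame_pixels * log2(pixel_states)  # bits per frame (upper bound)

print(h_text)   # a handful of bits per character
print(h_frame)  # millions of bits per frame
```

An upper bound of this kind says nothing about what is actually expressed, of course; it only bounds the possible surprisingness of the medium.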

I mention academic papers because I find it strange that academics remain transfixed by the academic paper as the 'gold standard' of intellectual communication. There are important reasons why it ought not to be. Not least among them is the fact that today's science is not the science of certainty and objectivity which the academic paper, as conceived by the academic societies of the 1660s, was designed to communicate. Today's science is a science of contingency, complexity and uncertainty. Communicating uncertainty through a medium designed to communicate certainty is surely going to lead to problems. And indeed it does... the 'marketisation' of education may be the most devastating manifestation of this epistemological misalignment.

With video, one can express one's uncertainty - which, in a science of uncertainty, is a very important thing to communicate: in the end, the point of communicating science is the coordination of understanding and action. As part of the FIS discussion on academic publishing, I produced this:

The point of making a video is trying to convey honestly the uncertainties of knowledge and understanding. It is important to use a communication medium which affords this. The academic paper encourages people to hide, posture, and so on. Our educational market encourages people not even to care about 'communicating' but merely to posture, and acquire the status markers of publication. In the FIS discussion, a number of people have expressed pessimism about "human nature" in the sciences - that the ego-driven posturing will always win out. But I can't help wondering if this ego behaviour wasn't the product of the means of communication (the paper) as well as the epistemological model. If scientists used a more revealing technology to communicate, we would see, I think, different kinds of scientific behaviour.

Another important reason for thinking about scientific communication is that it is scientific communication which Universities are fundamentally about. In recent years, this has been forgotten - even in the most elite institutions. The market-driven focus is now on teaching students, with endless speculation about the 'best' pedagogy (whatever that means - it is all speculation, because nobody can see learning). So, we end up in a very confused place. "Teaching" in universities involves preparing people for the labours of scientific communication - which still means academic papers, conference presentations, etc... even when the science and the epistemology now concerns uncertainty and complexity. Educational technologists are enlisted to attempt to produce resources that encourage learners to develop themselves in ways which turn them into copies of the 17th century enlightenment scientist. This is a bit crazy.

The universities of the 1700s changed fundamentally within the space of 100 years or so (Bacon's "Advancement of Learning" of 1605 castigated Cambridge's curriculum, and by the 1700s, its Aristotelian ways had pretty much disappeared). What changed them? It was the transformed practices in experiment and communication among scientists, from the invisible college to the Royal Society.

Our universities today are in a mess - this is a very bad time in education. University managers think they can determine the future of Universities. But in the end, the future of Universities is always led by scientific communities. When those communities change the way they communicate then everything else in the education system changes alongside. I believe much of what we consider typical of a University today will have disappeared in 100 years, just as the once-unquestionable supremacy of Aristotelian doctrine in the scholastic university was swept away. The abandonment of the academic paper (certainly in its current form) and the adoption of new ways of communicating uncertainty will lead the way in this.

The reason why I think this will happen is because our epistemology of uncertainty cannot successfully communicate itself through a low-variety medium. It demands richness, aesthetic power, and emotional connection. The Newtonian, Lockean doctrine of the scientist as dispassionate observer cannot be right; complexity science will eventually disarm it.

There are some simple questions to ask: Do scientists really communicate with one another today? Is citation an adequate indicator of how well we understand each other? Are conferences any better for scientific communication? (I'm sorry, your time is up - you have to stop). If papers and conferences are no good for scientific communication, what actually works? What can we do better?

Probably as a first step, we have to realise that science isn't possible without communicating. 

Sunday, 25 September 2016

Big Data and Bad Management

There's been a lot of stuff in the news recently about the threats posed by Big Data, AI, etc. "Computers will take our jobs!" is the basic worry. Except nobody seems to notice that the only jobs that seem bullet-proof are those of the managers who determine that other people's jobs should be replaced with computers. It is bad management we should worry about, not technology.

No computer is, or will ever be, a "match" for a single human brain: brains and computers are different kinds of things. Confusing brains and computers is an epistemological error - a "mereological fallacy" (the reduction of wholes to parts), a Golem-like mistaken belief in the possibility of 'mimesis'.

Ross Ashby, who studied brains closely for his entire career, was aware that the brain was a highly effective variety-absorbing machine. Its variety reduction is felt in the body: often as intuition, instinct or a 'hunch'.

Computers, by contrast, count. They have to be told what to count and what to ignore. In order to get the computer to count, humans have to attenuate the variety of the world by making distinctions and building them into the computer's software. If the computer does its job well, it will be able to produce results which map uncertainties relating to the initial criteria for what can be counted and what can't. Knowledge of these uncertainties can be useful - it can help us predict the weather, or help translate a phrase from one language to another. But it is the hunches and instincts of human beings which attenuate the computer's world in the first place.
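A toy sketch of that attenuation: the category map below stands in for the human-made distinctions, and every name in it is hypothetical:

```python
from collections import Counter

# The distinctions are made by humans before the computer counts anything.
CATEGORIES = {"rain": "weather", "sun": "weather", "bread": "food"}

def count_by_distinction(tokens):
    # Anything falling outside the agreed distinctions is simply not counted.
    return Counter(CATEGORIES[t] for t in tokens if t in CATEGORIES)

counts = count_by_distinction(["rain", "bread", "sun", "love", "rain"])
print(counts)  # 'love' vanishes: it lies outside the distinctions
```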

Stafford Beer tells the story of Ashby's explanation for accepting without a moment's hesitation the invitation to move to the US and work with Heinz von Foerster in Illinois. Ashby explained to Beer:
Years of research could not attain to certainty in a decision of this kind: the variety of the options had been far too high. The most rational response would be to notice that the brain is a self-organizing computer which might be able to assimilate the variety, and deliver an output in the form of a hunch. He [Ashby] had felt this hunch. He had rationally obeyed it. And had there been no hunch, no sense of an heuristic process to pursue? Ross shrugged: ‘then the most rational procedure would be to toss a coin’
Our biggest threat is bad management, which feeds on bad epistemology. The great difficulty we have at the moment is that our scientific practices of Big Data, AI and so on, are characterised by complexity and uncertainty. Yet we view their outputs as if they were the 'objective' and 'certain' outputs of the classical scientist. Deep down, our brains know better.

Tuesday, 20 September 2016

Status Scarcity and Academic Publishing

Update: 25/9/16: A more complete version of this post is here:

A published academic paper is a kind of declaration: the board of such-and-such a journal agrees that the ideas expressed in the paper are a worthy contribution to its discussions. It is, in effect, a license to make a small change to the world. Alongside the license come other prestige indicators which carry real value for individuals: in today's academia, publications help to secure the position of academics in universities (without them, they can lose their jobs). Beyond publication itself, citations serve as further 'evidence' of the approval of a community. Fame and status as a "thought leader" come from many citations, which in turn bring invitations to keynotes at conferences, impact of ideas, secondary studies of an author's ideas, and so on. Fundamentally, there is a demarcation between the star individual and the crowd. Publication counts because it is scarce: approval for publication is a declaration of scarcity.

Publication in some journals is more scarce than in others. The less the probability that a paper might be accepted for publication in a journal, the greater the status associated with that journal.  High ranking journals attract more citations because they are seen to be more authoritative. Journals acquire status by virtue of their editorial processes and the communities they represent. The scarcity declarations made to an author reflect and serve to enhance the journal's status.

With scarcity comes economics. Access to published work in high ranking journals has a value greater than work published in less highly ranked journals, or work published for free. Since academic job security is dependent on acceptance by the academy, and since the means of gaining acceptance is to engage with the scholarship in high-ranking journals, publishers can demand a high price for access to published work. This is passed on to students in Universities, and access to intellectual debate is concentrated within Universities whose own status is enhanced by their position as a gateway to high ranking scholarship.

Moreover, Universities employ academics, whom they expect to publish in high-ranking journals. The status of individual academics is enhanced through publication in high-ranking journals, and the status of journals is enhanced by their maintenance of the scarcity of publication; thus the University declares scarcity in access both to well-published academics and to high-ranking journals. Successful publication increases job security because it reinforces the scarcity declaration by the institution.

A third layer has recently emerged which reinforces the whole thing. The measurement of status through league tables of universities and, indirectly, journals has introduced an industry of academic credit-worthiness to which institutions are increasingly coerced to submit themselves. Not to be listed in the league tables is akin to not being published in high-ranking journals.

In the end, students and governments pay for it all. The money is split between the Universities and the publishers.

The problems inherent in this model can be broken down into a series of 'scarcity declarations':

  • The declaration of scarcity of publication in journals for authors
  • The declaration of scarcity of access to journals by institutions
  • The declaration of scarcity of status of institutions through league tables
  • The declaration of scarcity of intellectual work within the universities

How has this situation evolved over history? How has technology changed it?

Before the Royal Society published its transactions (generally considered to be the first academic journal), publication was not considered something that scientists ought to do. The publication of scientific discoveries was frequently cryptic: an assertion of the priority of the individual, without giving anything away in terms of specific details of the discovery which might then be 'stolen' by other scholars. So Galileo's famous anagrams were a way of making a declaration that "Galileo has made a discovery" without necessarily saying what it was.

The possession of knowledge was the key to enhancing status in the medieval world - so scientists became 'hoarders' of knowledge. It is perhaps rather like some university teachers today who might be unwilling to have their lectures videoed: if their performance in class were captured in a way that could be infinitely replayed and reused, their jobs would be threatened because they would no longer be required to lecture. Equally, many academics today are resistant to blogging because they don't want to 'give their ideas away'. The medieval scholar was much like this.

In an age of printing, knowledge hoarding became increasingly difficult to defend. To enhance one's own status within an institution increasingly necessitated reaching out to a larger readership in other institutions. Publication practice gradually took on the form that we now know. One of the best examples is the Royal Society's publication of its history (two years after its foundation!). This was subject to considerable and well-documented bureaucratic processes and editorial control: the 'history' was a declaration of the institution's own status, and it sought to preserve its own distinctness.

The Royal Society's practices of peer review represented a change not only in scientific practice and epistemology, but also in the democratisation of intellectual status acquisition. Publication and admittance to the academy was technically available to all. The status of observation and experiment supported the democratic movement. The noteworthiness of the experiment and its results was more important than the status of the individual. Science was the gateway to truth - the uncovering of certainties in nature. We tend to see this epistemological shift occurring alongside the shift in communicative practice. But fundamentally the technologies of communication and the scientific epistemology were probably interconnected - the technology brought about new epistemologies.

This is an interesting perspective when we come to the internet. If we live in what some call an 'information society', is it any surprise that information frames a new scientific epistemology? The contrast between our information world and the world of the Royal Society lies in the certainty that was assumed to lie behind scientific discovery. Uncertainty rather than certainty is the hallmark of modern science - whether in the probabilistic modelling of economics or of patterns in DNA, the analysis of big data, the investigation of quantum fields or the study of ecologies. And information itself is, at least from a mathematical perspective, a measure of uncertainty. So we move from the certainty of the Royal Society and the democratisation of academic publication to the uncertainty of information science - and yet we retain the publication model of the 17th century.

This publication model is in trouble. Journals struggle to get reviewers, publishers have become over-powerful, education is increasingly unaffordable. Meanwhile Universities have adopted practices which have reduced their running costs, employing cheap adjunct lecturers who can barely afford to eat, whilst increasing their revenues. Consequently the ecology of scholarship is increasingly under threat. It is curious that in a world where knowledge is abundant, universities have maintained their scarcity (evidenced by rapidly rising fees), and publishers - whilst coming under attack for their practices - largely operate with the same models that they did in the 18th century. These are all signs of education in crisis.

There have been attempts to address this crisis. In the early 2000s, the realisation of the technological abundance of knowledge suggested that it might be possible to bypass the institution altogether. Guerrilla tactics to open up closed journals have appeared, with Sci-Hub being the most famous example. New models of peer review have been introduced, and new models of open access publishing. But as one part of the status problem is addressed, so a different aspect of the same problem opens up: open-access publishing is often little more than the opportunity for an author to buy increased chances of citation.

But the journal paper itself seems outdated. Video appears to be a much more compelling medium for advancing intellectual arguments and engaging with an audience. Why do we not present our ideas in video? On YouTube it is artists rather than academics who have harnessed the power of video for coordinating understanding. An uncertain world requires not the presentation of definite results and proof, but rather the determination and coordination of the constraints of understanding. In an uncertain world, knowledge and teaching come together. Then there are other means of coordinating understanding through online activities.

Sunday, 18 September 2016

Student Rent Strikes - Revisiting the political power of an un-mortgaged society?

Inspecting the looming world of financialised housing in 1959, Aneurin Bevan gave a speech to the Labour Party conference:
I have enough faith in my fellow creatures in Great Britain to believe that when they have got over the delirium of the television, when they realize that their new homes that they have been put into are mortgaged to the hilt, when they realize that the moneylender has been elevated to the highest position in the land, when they realize that the refinements for which they should look are not there, that it is a vulgar society of which no decent person could be proud, when they realize all those things, when the years go by and they see the challenge of modern society not being met by the Tories who can consolidate their political powers only on the basis of national mediocrity, who are unable to exploit the resources of their scientists because they are prevented by the greed of their capitalism from doing so, when they realize that the flower of our youth goes abroad today because they are not being given opportunities of using their skill and their knowledge properly at home, when they realize that all the tides of history are flowing in our direction, that we are not beaten, that we represent the future: then, when we say it and mean it, then we shall lead our people to where they deserve to be led!
One of the most interesting things about the property boom and the mortgage crisis is that few young people can afford to take out a mortgage sufficient to buy a house. Of course, this deprives the money-lenders of the opportunity to control the young and tie them into financial servitude for 25 years or more. Although for the young who believe they deserve the same standard of living as their parents (but can't get it) this may seem terrible, it also provides the young with political power - which they have yet to realise.

The atomised mortgaged property-owning individual was (is) politically disenfranchised not only through the mortgage itself, but also through their impaired ability to organise themselves into a political force. The collapse of heavy industry, and the unions which were once so powerful meant that there was no single target to strike collectively to hold elites to account.

So heavy industry has gone. Massed labour has gone... to be replaced with mass university education. The student rent-strike is exactly the same kind of phenomenon as the organisation of mass political power in the past. Rent hurts students on a day-to-day basis. It means they can't eat properly or go out in the evening. I think the rent strike is likely to succeed - in London, it has already started to show results. Of course, Universities may threaten legal action, etc. But against everybody? I doubt it - there are too many vested interests in the students being there - and Universities without students aren't Universities. The interesting thing is, if the rent strike is successful, what next? What, when students rediscover the power of self-organisation and political action, will be next?

What about a "fees strike"? This is more difficult. Fees are paid directly to the University through loans taken out by the student. The student never sees the money, and has no power to withhold it. All they can do is leave, which would also mean not getting their qualifications. I'm not entirely sure that mass exodus as a political threat is completely out of the question (who knows - particularly with dwindling prospects for graduates, and the fact that a student who's studied for a year knows that the rest is more of the same), but the question over rent will raise a lot of questions not just about student finance, but about social power.

Friday, 9 September 2016

Gordon Pask: "A Discussion of the Cybernetics of Learning Behaviour" (1963)

At #altc this year (which I didn't attend) there was a keynote given by Lia Commissar (@misscommissar) about the brain and learning. By coincidence, I stumbled across a volume in Stafford Beer's archive in Liverpool, edited by Norbert Wiener, on "Nerve, Brain and Memory Models" from 1963. It followed a Symposium on Cybernetics of the Nervous System at the Royal Dutch Academy of Sciences in April, 1962. There is a stellar list of contributors:
W. R. Ashby, V. Braitenberg, J. Clark, J. D. Cowan, H. Frank, F. H. George, E. Huant, P. L. Latour, P. Mueller, A. V. Napalkov, P. Nayrac, A. Nigro, G. Pask, N. Rashevsky, J. L. Sauvan, J. P. Shade, N. Stanoulov, M. Ten Hoopen, A. A. Verveen, H. Von Foerster, C. C. Walker, O. D. Wells, N. Wiener, J. Zeman, G. W. Zopf
There is a long paper by Gordon Pask called "A discussion of the cybernetics of learning behaviour" which I thought would be relevant to the current vogue for everything 'neuro' in education. There are many other things there too, including a fascinating paper by Ashby and Von Foerster on "The essential instability of systems with threshold, and some possible applications to psychiatry". There is also a record of the conversation with Wiener afterwards. 

I've quoted the opening of Pask's paper below because it is an excellent summary of the neuroscience of the time. It was surprisingly advanced; in many ways, today's emphasis on MRI scanning technologies has meant that the field has become somewhat homogenised. One of the reasons I'm interested is that the models of the brain taken up by Stafford Beer in his Viable System Model very much belong to this period: what effect would a more up-to-date understanding of the brain have had on his thinking? (I'm investigating this with people in Liverpool medical school.)

But Pask's contribution on Learning Behaviour is also interesting because it presents a very early (and rather formal) version of what became conversation theory. He relies quite heavily on Robert Rosen's work ("Representation of biological systems from the standpoint of the theory of categories" (1958) - Bulletin of Mathematical Biophysics; "A logical paradox implicit in the notion of a self-reproducing automaton" (1959), same journal). His championing of Ashby's approach to the brain is, I think, very important.

From "A discussion of the cybernetics of Learning Behaviour" - Gordon Pask, 1962

1.2 The approach of cybernetics

Some cybernetic models are derived from a psychological root, for example, Rosenblatt's (1961) perceptron and George's (1961) automata stem largely from Hebb's (1949) theory. Others, such as Grey Walter's (1953) and Angyan's (1958) respective tortoises, have a broader behavioural antecedent.

On the other hand, neurone models, like Harmon's (1961) and Lettvin's (1959), are based upon facts of microscopic physiology and have the same predictive power linked to the same restrictions as an overtly physiological construction.

Next, there are models which start from a few physiological facts such as known characteristics or connectivities of neurones and add to these certain cybernetically plausible assumptions. At a microscopic level, McCulloch's (1960) work is the most explicit case of this technique (though it does not, in fact, refer to adaptation so much as to perception) for its assumptions stem from Boolean Logic (Rashevsky (1960) describes a number of networks that are adaptive). Uttley (1956), using a different set of assumptions, considered the hypothesis that conditional probability computation occurs extensively in the nervous system. At a macroscopic level, Beurle (1954) has constructed a statistical mechanical model involving a population of artificial neurones which has been successfully simulated, whilst Napalkov's (1961) proposals lie between the microscopic and macroscopic extremes.

Cyberneticians are naturally concerned with the logic of large systems and the logical calibre of the learning process. Thus Willis (1959) and Cameron (1960) point out the advantages and limitations of threshold logic. Papert (1960) considers the constraints imposed upon the adaptive process in a wholly arbitrary network, and Ivahnenko (1962) recently published a series of papers reconciling the presently opposed idea of the brain as an undifferentiated fully malleable system and as a well structured device that has a few adaptive parameters. MacKay (1951) has discussed the philosophy of learning as such, the implications of the word and the extent to which learning behaviour can be simulated; in addition to which he has proposed a number of brain-like automata. But it is Ashby (1956) who takes the purely cybernetic approach to learning. Physiological mechanisms are shown to be special cases of completely general systems exhibiting principles such as homeostasis and dynamic stability. He considers the behaviour of these systems in different experimental conditions and displays such statements as 'the system learns' or 'the system has a memory' in their true colour as assertions that are made relative to a particular observer.
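Ashby's point in the closing sentences - that "the system learns" is an assertion relative to an observer - rests on his idea of ultrastability: a system whose parameters change randomly whenever its essential variables leave their permitted range, until a stable configuration is found. A toy sketch of that mechanism (my own illustration, not Ashby's actual homeostat circuit; all parameters here are hypothetical) might look like this:

```python
import random

def step(x, W, dt=0.1):
    """Advance a simple linear dynamical system x' = W x by one Euler step."""
    return [xi + dt * sum(w * xj for w, xj in zip(row, x))
            for xi, row in zip(x, W)]

def random_weights(n):
    """A fresh random parameter setting - Ashby's 'step-function' change."""
    return [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

def homeostat(n=4, limit=2.0, steps=5000, seed=0):
    """Run the system; re-randomise weights whenever an essential
    variable leaves its permitted range. Returns the final state and
    the number of parameter changes made along the way."""
    random.seed(seed)
    x = [random.uniform(-1, 1) for _ in range(n)]
    W = random_weights(n)
    changes = 0
    for _ in range(steps):
        x = step(x, W)
        if any(abs(xi) > limit for xi in x):   # essential variable out of range
            W = random_weights(n)              # blind random re-parameterisation
            x = [random.uniform(-1, 1) for _ in range(n)]
            changes += 1
    return x, changes

x, changes = homeostat()
print(changes, [round(xi, 3) for xi in x])
```

Nothing in the mechanism "knows" anything: it only re-randomises when disturbed. Yet an observer watching the parameter changes die away would naturally say the system has "learned" a stable configuration - which is exactly the observer-relativity Ashby insists on.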