
Information Systems – Fundamentals and Issues
©1990-2006 John Lindsay, Kingston University, School of Information Systems

Chapter 2: Information

Between Claude Shannon's Theory of information[1] and Anthony Smith's Politics of information[2] there needs to be a very long bridge. It is rather unfortunate that the same word has been used for such a disparate set of meanings. When combined with knowledge, intelligence, communications, systems, and design we are set for the maximum amount of confusion possible, for these are the portmanteau words of the late 20th century, the words into which we put all processes which can't be classified more precisely[3]. Each of them has an historic meaning, and has travelled a path. It is worth following that path in a little detail.

Information has just celebrated its six hundredth anniversary as a recorded part of the English language[4]. In 1386 Chaucer, no less, used it: "Whanne Melibee hadde herd the great skiles and resons of Dame Prudence, and hire wise informacions and techynges". The word is the active noun of "inform" which underwent an interesting shift from "without form" to "to give form to". "The action of informing, communication of some knowledge or news of some fact or occurrence; the action of telling or fact of being told something". "Knowledge communicated concerning some particular fact, subject or event; that of which one is apprised or told; intelligence, news." It also developed a specific legal meaning, "the action of informing against, charging or accusing (a person)".

This all seems clear enough. It is then a pity that Shannon used the word to mean (* get quote). When Anthony Smith uses the word, he is talking about the relationship of power between the controllers of the mass media, particularly television, and their audience (which means that hearers should be replaced by viewers). Because there is no feedback mechanism, that use of the word specifically ensures we have no knowledge of whether informing has taken place at all.

Information follows data it would appear[5]. Data is the plural of datum, "A thing given or granted, something known or assumed as fact and made the basis of reasoning or calculation; an assumption or premise from which inferences are drawn."

There appears to be a general sense that data is "raw information" or "information without context", and that information is "data with context" or "data with meaning".

Galliers for example defines "information as that collection of data, which, when presented in a particular manner and at an appropriate time, improves the knowledge of the person receiving it in such a way that he/ she is better able to undertake a particular activity or make a particular decision"[6].

Nichols says, "Information must (his italics) possess the first three attributes - relevance, availability, timeliness - to have value, and thus to qualify as information. Objectivity, sensitivity, compatibility, conciseness, and completeness are desirable, but they are present and necessary only in varying degrees"[7].

Peter Drucker says, "Information is data endowed with relevance and purpose. Converting data into information requires knowledge"[8].

I'd like to argue that this distinction is not helpful. Data and information are views of the same objects from different perspectives, and contain certain complexities which give the "signal" "meaning". I'd like to run through a few examples of what I mean.

An example

Let us start at the very beginning, with a point, "which has position and no magnitude". This point is the position of a cursor on a digitising tablet, of a cursor on a screen or of a dot on a dot matrix or ink jet printer for example. The position is defined by the co-ordinates and the resolution of the device. Already there is data, information and context.

According to a second level of meaning, that defined by the system, the co-ordinates in turn relate to a real world entity instantiation: on the digitising tablet, for example, the co-ordinates of the copper membrane map onto an overlay which defines them as the centroid of an enumeration district.

This point could in turn have an address - a post code or a building, or a city. The point, according to the scale within which the real world entity type and the system entity type map onto one another, could vary from the atomic to the astronomic. Within the sphere of Geographic Information Systems, we are probably concerned from the millimetre to the kilometre - from an electric cable to the significant land mass. (Though other systems based on GIS technology might be concerned with the biological, cellular or molecular, down to 10⁻¹⁰, and then from the spatial up to the astronomical, say 10¹⁰.) A third layer of data, information and context.

A second point produces a line, a third a polygon, and a fourth a more complex polygon, including a third dimension. The line can be a street centre, a property boundary, the polygon an externality - three streets, or an internality, a property. That polygon may have a centroid, calculated by an algorithm, an area, calculated by another algorithm. The centroid, a point not created by digitisation but by calculation, could in turn be the statement of an address. The line could contain flow or capacity, for example in a pipe, calculated by another algorithm from an address contained elsewhere. Two points might define the ends of a set of points, but according to another address the contained points, which might be far from a straight line, will be represented by a straight line, because that is the content of that address at that level of aggregation.[9]
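A minimal sketch in Python, using made-up co-ordinates, of the sense in which the same digitised points yield different derived objects: an area from one algorithm, a centroid from another.

    # A hypothetical property boundary digitised as (x, y) co-ordinates, in metres.
    property_boundary = [(0.0, 0.0), (40.0, 0.0), (40.0, 25.0), (0.0, 25.0)]

    def polygon_area(points):
        # Shoelace formula: one algorithm derives an area from the points.
        total = 0.0
        n = len(points)
        for i in range(n):
            x1, y1 = points[i]
            x2, y2 = points[(i + 1) % n]
            total += x1 * y2 - x2 * y1
        return abs(total) / 2.0

    def polygon_centroid(points):
        # A second algorithm derives a centroid - a point created by calculation,
        # not by digitisation - from the same co-ordinates.
        cx = cy = signed_area = 0.0
        n = len(points)
        for i in range(n):
            x1, y1 = points[i]
            x2, y2 = points[(i + 1) % n]
            cross = x1 * y2 - x2 * y1
            signed_area += cross
            cx += (x1 + x2) * cross
            cy += (y1 + y2) * cross
        signed_area /= 2.0
        return (cx / (6.0 * signed_area), cy / (6.0 * signed_area))

    print(polygon_area(property_boundary))      # 1000.0 square metres
    print(polygon_centroid(property_boundary))  # (20.0, 12.5) - a possible address point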

The polygon might at one moment be a property, but at another an enumeration district, the point at one corner might be at one moment the boundary of a property, the next of an enumeration district, the next of a city. The externality of two polygons might be the area of an enumeration district which is not in a post code district. This would have been calculated from a combination of an algorithm and a library of addresses, which we might call a procedure.

Some of these polygons could be stored as special procedures in libraries, from which they could be called up, as "houses" or "doorframes" or "dwelling for low income family" or "plotsize". These procedures in turn could be linked together according to the level of aggregation or disaggregation that a particular operation concerned itself with.

A special form of "point" would be the set which produces the alphanumeric character set. These would be "understood" by the computer on the basis firstly of a set of codes, ASCII for example, which govern their representation, then by a set of protocols which govern their handling (say for example a word processor which allows point size, bold, indexing, find, replace, spell check), or a set of protocols called algorithms or functions which handle their processing (say add, divide, square root, area, flow, greater than).

A more complex set of protocols would possibly define them as legitimate "words" from a dictionary, or capable of being parsed, as a result of which an approximation to natural language could be attempted. Some of these "sentences" could in turn be grouped into messages which could be called up as procedures, or "forms", for example planning application pro formas, which would link to committee agendas, to minutes and to site visits. Parts of these forms would produce fields of entity occurrences which would provide dates or times or places, which would take us back to the addresses of the earlier points. An address is at another level simply a set of fields within a database, but one of those fields could be a unique land identification number, which could link to the set of co-ordinates which defined that property, which linked to another database which defined the unique property identification number, which in turn located the taxable value or the owner, which in turn could link to family data and so forth.
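A small sketch of that chain of keys, in Python, with entirely hypothetical identifiers and tables: each database holds one part of the picture, and the shared identifiers let one level of meaning be built on another.

    # Hypothetical, simplified registers; the identifiers are invented for illustration.
    land_register = {
        "LAND-0001": {"coordinates": [(0.0, 0.0), (40.0, 0.0), (40.0, 25.0), (0.0, 25.0)]},
    }
    property_register = {
        "PROP-0917": {"land_id": "LAND-0001", "taxable_value": 84000, "owner_id": "OWN-0042"},
    }
    owner_register = {
        "OWN-0042": {"name": "A. N. Other", "address": "1 High Street, Kingston"},
    }

    def describe_property(property_id):
        # Follow the chain of keys: property -> land parcel -> owner.
        prop = property_register[property_id]
        land = land_register[prop["land_id"]]
        owner = owner_register[prop["owner_id"]]
        return {
            "boundary": land["coordinates"],
            "taxable value": prop["taxable_value"],
            "owner": owner["name"],
            "address": owner["address"],
        }

    print(describe_property("PROP-0917"))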

I'm presuming that the concepts of normalisation and of tables are now common knowledge. Relational databases were in their time a step forward on hierarchical files. But the consequence of separating the data from the operations to be performed on it led to the possibility of nonsense occurring. The advantage of linking data sets and the operations to be performed on that data, as objects and object types with addresses, is that not only can nonsense not be performed on the data, but groups of sets of objects can be composed and decomposed at meta levels of the system.

In turn there is no reason why an object cannot contain a set of rules - conditions under which a set of procedures apply:

If the PROPERTY is in ENUMERATION DISTRICT N then check whether it is a LISTED BUILDING.

So we have entities called PROPERTIES and ENUMERATION DISTRICTS.

These have co-ordinates and relationships to addresses and owners. We have conditions, such as LISTED BUILDING which in turn have other rules such as the conditions which define a listed building, or rules on what an OWNER might not do to a listed building. Or these rules can trigger state transitions such as:

If the APPLICATION has not been returned within n days then initiate COURT PROCEEDINGS.

A COURT PROCEEDING could in turn check whether that OWNER (for an APPLICATION can be returned only by an OWNER who must have a PROPERTY and all this is implied from the rules that link the objects at the meta object level) had other proceedings out against him. APPLICATIONS and COURT PROCEEDINGS are in turn objects which contain text which produce forms, fields, which draw data from data sets and initiate procedures.
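A minimal sketch in Python of how such rules and state transitions might be attached to the objects; the register, the identifiers and the n-day threshold are all invented for illustration.

    from datetime import date, timedelta

    # Invented register and threshold, purely for illustration.
    listed_buildings = {"PROP-0917"}
    RETURN_DEADLINE = timedelta(days=28)   # the "n days" of the rule above

    def check_listed(property_id, enumeration_district, district_n):
        # If the PROPERTY is in ENUMERATION DISTRICT N, check whether it is LISTED.
        if enumeration_district == district_n:
            return property_id in listed_buildings
        return None   # the rule does not apply

    def application_state(sent_on, returned_on, today):
        # If the APPLICATION has not been returned within n days,
        # initiate COURT PROCEEDINGS - a state transition triggered by a rule.
        if returned_on is not None:
            return "RETURNED"
        if today - sent_on > RETURN_DEADLINE:
            return "COURT PROCEEDINGS"
        return "PENDING"

    print(check_listed("PROP-0917", "ED-12", "ED-12"))                  # True
    print(application_state(date(1990, 1, 6), None, date(1990, 3, 1)))  # COURT PROCEEDINGS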

There are now software packages which are capable of integrating these entity-relationship, object oriented and rule based approaches. They are still however quite expensive and tend to need to run on fairly expensive equipment. There are however packages which can achieve parts of this general picture, which can be integrated at the design level if the designer has a grasp of the general approach. Experience indicates that the drop in price of software and hardware is such that packages capable of producing usable results will soon be available. One must not wait for the desirable kit to begin the planning process.

What lies behind all this rather long winded detail is that the core digitised map data set, the set of co-ordinates by which features are recorded, has the potential of more views on the data than possibly any other form of data with which we have had to deal before. The sheer quantity of data generated means that it is more likely that data will be bought and sold in a market, but the originator of the data might not have recognised the value of the market, or the range of possible uses. The greater the extent to which the data is tied to particular machines or processes, the less the likelihood that the market will be realised. In developing countries, where the expertise is rarer and the capital equipment costs higher, the argument for avoiding waste by putting in place information plans which deal with the definition of these protocols and primitives and the functioning of this market is stronger.

Another example

Let us take another example, from retailing. At the point of sale the laser scanner has an item passed over it. The ANA barcode on the item is read into the local database, which records it as a can of baked beans, 16 oz, price 38p, and from elsewhere in the system that it is at checkout 8 at 3.30pm on Friday, 13th October at the Kingston Branch. Within that particular bundle of purchases was also a can of catfood and a packet of biscuits.

The little 0s and 1s start off with a higher level of meaning than the dots and the polygons. The barcode has been earlier constructed to mean a can of baked beans within a vast European classification of all baked beans and all cat foods and all biscuits and all other items which have been pulled into the world of the ANA. The barcode existing in the database of that particular computer at that time links it to an order which arrived on a particular truck at a particular time of the day at a particular price.

If the purchaser was using a credit card then that transaction would also give you the name and address of the customer.

At a higher level of aggregation, you would be able to map the sale of baked beans against time within that store, or across stores. This in turn could provide a link backwards into the supply chain for either the placing of orders or the monitoring of the placing of orders. Mapped upwards across all goods sold in a day it would give you a balance of the amount of money available across the organisation for placing on the money market (rather than giving it to the bank for a few days!).
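A sketch, using invented transaction records, of how the same point-of-sale data supports both the item-level view and the aggregate views just described.

    from collections import defaultdict

    # Invented point-of-sale records: (item, branch, hour of day, price in pence).
    transactions = [
        ("baked beans 16oz", "Kingston", 15, 38),
        ("cat food",         "Kingston", 15, 42),
        ("biscuits",         "Kingston", 15, 55),
        ("baked beans 16oz", "Kingston", 16, 38),
        ("baked beans 16oz", "Surbiton", 15, 38),
    ]

    # One view: sales of a single line against time within one store.
    beans_by_hour = defaultdict(int)
    for item, branch, hour, price in transactions:
        if item == "baked beans 16oz" and branch == "Kingston":
            beans_by_hour[hour] += 1

    # Another view: total takings across all goods, for the money market decision.
    takings = sum(price for _, _, _, price in transactions)

    print(dict(beans_by_hour))   # {15: 1, 16: 1}
    print(takings, "pence")      # 211 pence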

The can of baked beans could wear a little clock on which the cost of purchase in bulk from the manufacturer and the cost of all the handling to get it there was marked up against the selling price. Then as it used up space and heat and light and cleaner's wages the clock could run down, until the manager knew that he was losing money just by holding it.

And because each good has a different shape (where shape is an object of meaning "involving handling, shelf-space, longevity") they could all wear little clocks from which the setting of prices could vary in an interesting way, just the way the barrow boy in the market knows when he has to drop the price of his lettuces.

If a marketing campaign had placed television adverts in one region but not in another, an attempt could be made to measure whether stores on the boundary of the catchment area showed any significant difference. If you had a model of the type of people you were aiming your product at, you could take the enumeration level data from the census to model the catchment area to the level of about 150 families, take from a database of available properties those desirable in terms of size and location, map these against transport facilities, and optimise. In each case the base level data starts off as a 0 or 1, with the smallest unit of associated meaning, and is built up by various users contributing meaning according to context. Each user in turn develops a view on a particular object of information according to that individual's purpose.

It is probably possible to estimate that something like 10⁸ of these transactions take place every day in Britain. They are the lifeblood of every information system. The process of building databases and rules proceeds as in example 1.

Yet another

Let me take a third example. Go into the library and take out a book. Here is the same barcode, but now a different one. This barcode is the ISBN or the accession number, and the barcode on your library card is "you". The book is on the stock of the library and the link with the computer gives the author and title, the date and place and publisher. The book itself and the database gives a classification number and a shelf number - an ordering like where you found the baked beans but different. Look on the back of the title page of your book (or this one) and you will find British Library or Library of Congress Cataloguing in publication data giving Dewey number and Library of Congress Classification number as well as subject heading. The catalogue entry might have an abstract, to give you more of an idea of the subject of the work or whether it is appropriate to your requirement, or the flyleaf might have a synopsis.

The very existence of the book, as well as the formalism of author and title, not to mention the table of contents, the marks on the paper, the index, the fact that you can read it and that the entry in the library catalogue probably was not created by the librarian you know and dearly love, but came down a copper pipe, just like the price of the baked beans, from another database (which is part of the British National Bibliography, from which comes a vast compendium called Books in Print (now available also on CD-ROM)), places this little object into a world of meaning where the marks on the page mean what they mean because of the object in which they are contained. It is even possible that the whole book is now on CD-ROM and you can borrow that instead. Or it isn't even here as a point in space, it is on a machine somewhere and you can just log into it.

It remains strange that once there was a library catalogue, a matter about which it was difficult to get excited, but now there is the database!

Even then there were protocols and primitives. There were the Anglo-American Cataloguing Rules, by which the filing order of Chinese family names could be decided upon. There was the Dewey Decimal Classification System, by which it was ordained that religion took up ten per cent of the schedule, and within that Christianity took ninety per cent, the books of the Bible unto the nth generation of Titus, and the states of the USA, each one named and numbered, yet it was said that there was nothing ideological or political about classification schemes - neutral and value free, espousing the status of a science.

And the "you" on your library ticket, when the barcode is scanned, pulls up from the database your name and address, how many books you have overdue, and whether you will be allowed to borrow this one. Logically you should be paying for borrowing it, just like buying the baked beans. But to that we'll return.

In other words information is a social and organisational process, not a consequence of some characteristic of the "thing itself".

This seems to be at variance with another level of information processing, that which goes on within the thing itself. How the human brain handles information, or stimuli, and converts a stimulus into an action, or how it stores memory, is the subject of a substantial literature, and some disputation.

Is this the same thing as thinking? What do we mean by thinking?

Down to the level of how the DNA code functions or how the seed of a geranium creates a geranium, at the atomic and the molecular - the 10⁻⁸ of a geographic information system (for molecules can have a geography just like a planet), how information is stored and processed provides us with problems. We might argue that a litmus paper does not know that it has to turn blue or pink according to the acidity of its changed environment. In fact blue or pink is what we name the colour of the light reflected by that chemical at that pH. A thing can't know what a thing's gotta be. We might also argue that a geranium seed does not depend on information to be a geranium seed, or to flower. It is just doing what its combination of chemicals requires it to do. Nevertheless it remains a matter of fascination how the coding is organised.

The idea that the immediately post-molecular level organisms are capable of changing themselves, of acting strategically, presents us with more of a problem. Yet that is precisely what the AIDS virus and other retro-viruses as information processors appear to be doing.

One warning shot is that we have to be careful that our metaphors do not take over the way we see the world and begin to inform (give shape to) what we are describing as the world. There is the strange idea of that which is being informed becoming the informer - models of understanding the complex - clockwork, factory, telephone exchange, now computer - what is analogy, what metaphor? Similarly, theories about the process of how human cognition functions have generated a whole area of activity.

The link between the internal processing of the human brain and the social processing of information is a set of protocols and primitives. Some of them are so common that they are common sense. Some are so common that they aren't common sense at all. They haven't always existed, they have a history. The idea of alphabetical order emerges in the sixteenth century with printers needing to store type in a way that letters can be found. The standardisation of spelling emerges in the following century with the lexicographers. Even times and dates have histories.

The physical world provides us with evidence too. This is the domain properly called science. Things behave independently of our understanding of them, yet that which we look for is located in our social construction. This is the matter of the debate between Popper and Kuhn.[10]

Numbers

Numbers are possibly the most primitive of primitives we have. From the first me, you, they, or one, too many, or the marks in the sand, counting seems to reach back into the earliest of what we know about human information processing. The development of the base ten system, (counting on your fingers or toes), the concept of 0, of fractions, of negative numbers, of imaginary numbers, the story is the story of the development of the complexity of people's need to socially process information.[11]

The Romans' marks - the minimum number of straight lines to make an infinite set, but with no concept of 0 - making division almost impossible; the Arabic use of curves to give a graphic representation of base ten and the concept 0; *(maybe another?) are among the earliest protocols by which that complexity was able to develop.

The development of base two, 1 and 0, and its relationship with on and off or true and false, was followed by the development of base sixteen, the hexadecimal, to handle the more complex information processing capacity of advancing information technology.
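A trivial sketch of the same quantity written in the three bases mentioned, each a different protocol over the same primitives; the number itself is arbitrary.

    n = 1386   # the year of Chaucer's "informacions", chosen only as an example

    print(format(n, "d"))   # base ten:      1386
    print(format(n, "b"))   # base two:      10101101010
    print(format(n, "x"))   # base sixteen:  56a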

The rules of association that make up addition, subtraction, multiplication and division, the axioms of algebra and geometry, these form such a basic part of education in western society that we must presume that every designer of an information system and every user takes numbers and number theory to be given truths.

Languages - words

The sounds we make too must be among the most primitive of primitives. However, while the Arabic protocol for the definition of the graphic representations of the base 10 approaches an international standard, nothing similar exists with either spoken language or its graphical representations. The Roman alphabet has a greater internationalisation than any other, yet probably less than half the world's population understands it well enough to read or write it. For users and designers of information systems though we must presume that it, and a particular representation called "English", is the functioning standard. (Advances in computer processing give the capacity to reduce the cultural domination, a point to which we'll return later.)

The question of whether subjects, verbs and objects are as primitive is more deeply argued over, as is the question of the extent to which the capacity for language, rather than a competence in a particular language, is part of the neural capacity of the human brain. This harks back to the point made earlier about the molecular level of information processing.[12]

From the sounds and the graphical representations there develops a vocabulary, a list of legitimate sounds which will be understood by the participants of a particular community, and the protocols by which these will be joined together to give a set of meaningful utterances. The grammar or the syntax and the vocabulary are probably the most time consuming part of the process called "education".

This body of information called unfortunately "natural language" (for of course there is nothing natural about it - it is completely social) though it has a lexicon by which the definitions of the terms in the vocabulary may be related to one another, and a grammar by which legitimate structures may be coded, does not guide us towards meaning - the process whereby the intended act of communication authored by an individual is received by another or others (the reader) and is internalised to give the same intention as that of the author, eliciting a response which is decoded by the author to indicate that the meaning was comprehended.[13] Context, facial expression, tone, the position of furniture, are all part of the coding whereby sense, intention, feeling, all combine in meaning.

The formalisms of dictionaries and grammars themselves have a history - they developed at particular times and were authored by particular people.[14] We have however no grammar or vocabulary of context.

The graphical representation of the characters of the Roman alphabet is possibly the most densely packed protocol we have. To produce about twenty six primitives, to separate the sound from the meaning and the sound and meaning from the graphical representation was an act of brilliance, the author of which is unknown.[15] Cyrillic is a form of Roman alphabet differentiated for political reasons. Forms occurring in Arabic, most of the Indian languages, Thai, and so forth involve further problems which we will have to avoid here, with all the arrogance of someone whose mother tongue happens to be the one of the dominant party.

Within the formal language the smallest level above the letter is the phonemic, the character strings which represent the set of known sounds. Attempts to produce protocols for these have been less successful - there is no guaranteed relationship between a particular character set and its sound independent of context. Attempts to produce an international phonetic alphabet have foundered - it is not even mapped onto ASCII.[16] This will produce a layer of complication later on.

The next layer is the morphemic, that at which a character string is mapped to an entity in a lexicon. Some contain meaning in themselves, some only in relation with others. "S" and "ing" have limited meaning, but "sing" and "singing", much, within English. To a certain extent the phonetic is transferable across "languages" but the protocol which defines a particular meaning to a particular character string beyond the level of the proto-Indo-European language provides us with the complexity of modern European languages and the major problems of what usually passes for translation.

The layers of words, phrases and clauses, sentences, sets of meaning carrying codes; they are wrapped up in journal articles, published in monographs; they have indexes, tables of contents, citations, ISBNs, entries in the National Bibliography, royalties, copyright and, holy of holy, film and television rights: their organisation and how they carry information, store it and how that information is retrieved fall to a later part of our journey.

But we must point in passing to a special sort of word, not just a noun, but a proper noun - the names of people and places. These require special protocols, first name, family name, birth name, given name, mother's name, father's name. Ordering them requires a special protocol in the AACR2 rules. Telephone directories obey another set. Beware the naive designer of the database who thinks that names are simple things.

Then the street names, the suburbs, the towns, regions, the countries, and that most luxuriant of names, the post code. I mentioned in the section introducing the primitives of a geographic information system something of the process by which these objects are built up. It is the amazing capacity of relating a person's name to a place name which will give us one of the keys to the information economy.

Barthes goes on to define a special grouping of characters, what he calls a lexia, a grouping for which a special set of meanings may be constructed (or decoded)[17]. A clear understanding of his work would make a contribution to improvements in information retrieval. Combinations of words in sentences still have to be processed in a linear manner. Yet that is not how the brain works. Slowly we are moving towards being able to process text in a non-linear way. At the moment software called hypertext begins to point us in that way. In the meantime the best we have is the index.

Icons and ideograms

The representation of an ox's head as an A which was such a brilliant move, took a long time to evolve. In the meantime some languages remained iconographic, such as Egyptian hieroglyphics, and some developed a coding system where not a sound, but an idea was represented by a graphic, an ideogram, as we now find in Chinese, Japanese and Korean languages - Kanji.

Until recently these matters would have been of only passing interest to the major group of information systems designers, but recent developments in computing have forced us to return to them. Probably Apple's marketing of Hypercard will prove to have been the major factor, but iconic representations of meaning and the processing of ideograms is now presenting us with the need to extend our idea of the graphic representation of language to bring back the hieroglyph.

There is a reason however why the Egyptians finally abandoned the hieroglyph for the demotic - the limitation of the coding and ordering sequences which are the consequence of not being able to put in place the protocol of grammar.

That the ASCII code for kanji requires n,000 codes (think 5 - check*), that every character requires a combination of three keystrokes, and that a screenful of icons soon produces a jumble of possible meanings and limited mnemonics, indicates that there is a lot more work to be done before we can talk of a language of iconography. However some icons are enabling us to cross the limiting barriers of mutual incomprehension consequent on Babel.

The Highway Code might be the most complex that we generally have to know. Already it attempts a "grammar" (if I'm not using the term too loosely) - red triangles, red circles, blue for motorways (but green in Europe), and so forth. Yet even it has little scope for expansion, for learning what has to be learned to pass the driving test, and being able to decode the icons while travelling at speed, already tests the borders of capacity of layers of the population.

Another complex iconic language is that of the Ordnance Survey for the basic land map system. It too has reached the limit of complexity, but limited also by the design conditions of what may be represented on a sheet of paper without confusion. Computerised mapping and the iconic link we'll come back to.

Some other domains, architecture, chemistry, have specialised systems, requiring long socialisation into the rules, vocabulary and grammar of their languages. The domain becomes too wide to dwell on further.

Semiology

There are signs along the route however. The discipline of semiology attempts to produce a vocabulary and a grammar by which we can talk about meanings contained in signs.[18] Character strings, numbers, words, are all signs, as are all icons. Yet signs go further to include facial gestures, the distribution of furniture in a room, of rooms in a building, of buildings and the spaces between buildings in a city.[19]

One special group of signs we'll call signals. This is the group which triggers state transitions. A traffic light changes colour and vehicles stop or move. A bank balance reaches a certain figure and a new rate of interest is applied. A date in a month occurs and wages are paid. These signs are so tightly defined to their context that one might almost be able to say that the set of signals is the map of the domain of events and processes within that system. A special group of signals are those which control processes and through feedback trigger decisions. These, in cybernetics and operations research, are probably as far from the semiology of cinema as one could get.

The study of semiology leads us to the interesting recognition that different people see the world in different ways. The Mappa Mundi of Hereford Cathedral, the Rosetta Stone and Gutenberg's Bible all contain representations not only of icons, but of world views. The whole of language is the way we put our worlds together. The problem for the information system designer is translating from one world to another, such that the meaning of both is not perverted in the process. To this I'll return later.[20] (In Chapter 5)

There is also the semiology of people in groups, of the games people play, and their grammar. This leads to an interesting problem. Once I know that you know that holding my arms in this way, or always using your first name, is a sign by which I indicate this or that, then by doing it I'm not imbuing it with the same meaning as I would had I known not. But do you know whether I know?

It is developments in computer graphics, beyond Hypercard, which are going to make the integration of semiotics into information systems a matter of considerable interest. Complex languages can be built up where icons cultivate a hierarchical relationship with one another.

Semiology also has the virtue of making us aware that the systems of signs are social. Though the most recondite work has been done in the field of cinema, theatre, spectacle and the mass media, and appears a long way from designing information systems, it is useful to be reminded that much of what appears common sense is in fact socially constructed. If you were seeking to understand the inequality of the representation of women in the field of computers you could do worse than start here.

Vision and feel

The receiving of information through the eyes, not of lexical representations of numbers, but of the world around us, and our ability to change that world by touching it, is an area so complex that it lies outside the major activities of information systems design. In robotics there are the beginnings of where we might be moving, and to that I'll return later. Similarly how the brain processes visual and tactile information is a matter of interest, but cannot be pursued here.

Music, taste and smell

These too are meaning carrying systems, if not languages. They involve much of the social construction of our language systems and the information glue which holds society together. It is not at the moment realistic to talk of integrating them into an information system, but we need to be aware that something might be looming here.

Symbolic logic

There is a special type of artificial language (if we're going to call the others natural) which has particular significance for the designer of information systems, the language of symbolic logic. From number theory and logic come the ideas of Boole and the simple combination of and, or and not[21]. The capacity of the computer to represent on and off, true and false, 0 and 1 has provided us with a tool for representing the world in models so captivating that there is a temptation to mistake them for reality.[22] It has the further seduction that because it is based in a set of primitives which are defined as axiomatic, each set is built on the previous, and there is less need for a complex of protocols. The capacity to make the logic language and community independent, to represent the only truth, and to make the proof mechanism of formal logic the only legitimate proof system has proven seductive.
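A small sketch of Boole's three combinators at work, in the form every database query inherits; the two attributes are invented for illustration.

    # Two hypothetical attributes of a record.
    is_listed_building = True
    in_conservation_area = False

    # Boole's and, or and not, exactly as a retrieval system would apply them.
    print(is_listed_building and in_conservation_area)   # False
    print(is_listed_building or in_conservation_area)    # True
    print(not in_conservation_area)                      # True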

Analogy and metaphor, myth and fable

There are of course other logics. In some societies they are very important. Children come to learn their world through them, not through playing with a computer. They are not easily reducible to 0 and 1, or true and false, or on and off. However people understand their world through them and occasionally will die to defend them. Christianity, Mohammedanism, equality, Robert the Bruce, the family, apple pie...

Classification

In turn there is a set of social processes which seem to be almost as "hard wired" as the subject, verb and object - group things together and exclude them - the process of classification. How this is learned and how it copes with change, with emergent properties, with hierarchy, with order, takes information beyond the level we've been discussing so far into the full complexity of the social organisation of information.

Some classifications base themselves on the process of socialisation - we are part of a group because we see the world in this way. Some are the result of complex sets of protocols which are built up through complex social organisation. Some classification is based on a set of properties of the whole and the part, a leg is part of me or the finance department is part of the Polytechnic. Some base themselves on a relation of hierarchy, a corporal is subordinate to a general, though not part of him. Both are part of an army. The Finance Department might be subordinate to the Director (though certainly not to any of the teaching staff) though all are part of the Polytechnic. Flies are subordinate to the genus *whatever, though a fly is neither a part of a* nor must it obey the orders of a *. (*check whether I think there are any other types of classification)
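A sketch, with invented entries, of how the kinds of relation just distinguished might be recorded so that a system does not confuse part-of, subordinate-to, and membership of a class (here illustrated with the housefly's genus, Musca, standing in for the placeholder above).

    # Invented entries illustrating the relations distinguished above.
    part_of = {
        "leg": "me",
        "Finance Department": "Polytechnic",
        "corporal": "army",
        "general": "army",
    }
    subordinate_to = {
        "corporal": "general",
        "Finance Department": "Director",
    }
    member_of = {
        "housefly": "Musca",   # a fly belongs to its genus, but is not a part of it
    }

    def relations_between(a, b):
        # Report which, if any, of the recorded relations hold between a and b.
        found = []
        if part_of.get(a) == b:
            found.append("part of")
        if subordinate_to.get(a) == b:
            found.append("subordinate to")
        if member_of.get(a) == b:
            found.append("member of")
        return found or ["no recorded relation"]

    print(relations_between("corporal", "general"))   # ['subordinate to']
    print(relations_between("housefly", "Musca"))     # ['member of']
    print(relations_between("corporal", "Musca"))     # ['no recorded relation']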

The classification which most people will have experienced is possibly the Dewey Decimal Classification, used in many libraries. It shows the minimum which is required for a classification to exist as part of a formal information system, and the complications which are as a result of so doing.

There is a set of tables, a schedule and an index (now in their 20th incarnation) and a notation. A committee has to meet, tables and schedules have to be revised and updated. The system is used throughout the world, in some very big libraries. Reclassifying vast collections is unrealistic so some libraries use four editions at the same time. The fundamental tables give 10% to philosophy, 10% to religion, 10% to the natural sciences and 10% to the applied sciences. All rather inefficient.

Relationship and objects

There is one further social property of information which needs a comment – the property of relationship. Any particular datum may, by linking with another datum, become an object more complex than either of the components. The building of this link is a social process forming a relationship. These relations are not inherent in either the graphical representation of the object or in their names.

They might be logical, spatial, comparative, attitudinal, hierarchical, kinship, without implying particular influences, stimuli, responses, actions or reactions[23], that is they might exist in the real world, or they might be representations for the convenience of analysis.

There is a danger here - that of spaghetti chaos. Everything, at some level or other, is related to everything else.

Programmes as well as data are information

Our social construction of information has to become more complex when we allow that ways of doing things which are part of the known competence of every participant in a group can be coded in some form. These blocks of code can then be built together to form a set of instructions by which a group of people understand an event. Giving a dinner party, going to the dentist, keeping a set of accounts, raising a loan, buying a house, getting a job, managing a project, are all acts with sets of distinct and legitimate processes which are understood by the group. Transgressing the rules of convention (such as going to a job interview naked, or even to all but the most special dinner party) would lead to failure of the project.

All these programmes can be processed either serially or in parallel. Making breakfast in the most efficient way, getting out of bed, these are the projects of training children in PERT and GANTT and CPA. It is because we know these conventions and the conventions of the programming languages within which they are conducted that we can move from the language as grammar and lexicon to the language of practice.

Every programme is based on objects and relations, some of those objects contain algorithms but every programme contains ideas about the world.

Models

As the complexity of the layers of objects we are building increases, we move to considering sets of representations of relationships in models.

Systems

Sets of models which might have a boundary drawn around them, such that the set of interrelating parts, of discrete components, have mutual dependency, we call a system.[24]

There is clearly a problem here: who is saying that parts are discrete, who that they have an interdependency? Some, such as parts of the body, planets in the solar system, have an almost inarguable definition, in that the relation of part and whole is ontological, but * (get classification from Checkland)

Proof

The highest level we can move to in considering information is the question of proof. This is the proof problem stage 1: is this information right? To this we'll return in the chapter on design. Information cannot in and of itself be right or wrong. It was right for Aristotle to call the sun a planet when all he had in his classification system was planets and comets, for a sun appears to behave more like a planet than like a comet. It is not right now to call the sun a planet because we have a classification of stars, planets and satellites.[25]

Knowledge

It is at this point that I want to introduce knowledge, the third part of the trilogy. This is the link between the personal information processor and the social information.[26] Knowledge is the noun form of to know, one of the real portmanteau words. The earliest textual occurrence in English is in Beowulf, which would boggle my word processor. In the thirteenth century the lexical representation is still so unrecognisable to the modern machine that it would be unrealistic to reproduce it here. The first "meaning" given though is "to perceive as identical with one perceived before"[27], then to admit the claims or authority of, and mental comprehension of truths or principles.

Knowledge as personal, knowledge as social; this is the domain of the epistemological. It presents us with the next problem of proof 2 - the relationship between the subjective and the objective. From my living in the world I build up a world which I test continually and which I change according to complex rules which are themselves part of the building of the world. The world outside me is the object, from which I form myself, the subject. To the extent that I share a world with others, those data items which I appear to share unequivocally with others, those items which together form the intersubjectivity of the group, we say we know. Until something happens which changes that knowledge, in which case either the group changes the world, or I leave the group. (put in Kuhn's thing on Aristotle's planets? - no done it above - is it right?)

It is another major misfortune that just as Shannon squatted the word information to mean *, others have squatted knowledge to mean what they want it to mean. "We (note the royalty!) often talk of list-and-pointer data structures in an AI database as knowledge per se, when we really mean that they represent facts or rules when used by a certain programme to behave in a knowledgeable (would you believe it!) way".[28]

For we have to assert that knowledge has a bigger baggage to carry around. But between information and knowledge lies the realm of ideas. What information we choose to take to be real and what we dispose of depends on the world of ideas which we have fitted together, each of us alone and for ourselves, yet in the groups which we form in order to exist as political animals. People make history, but in conditions not of their own making[29].

This knowledge is coded in sets of complex programmes or codes which run through all discourses. There is the scientific code (in which it is illegitimate to use the first person pronoun), in which experimentation must be free of the subjective interpenetration of the experimenter, in which analysis leads to reductionism until you know everything about (almost) nothing. But it involves research grants and laboratories and security of tenure. There are rhetorical codes, where what is said in a lecture is different from a political meeting or a chat in the bar. There is a chronological code, in which a temporal logic applies to all discourse and a spatial code which we in England understand very well. There is a historic-cultural code - the strongest part of which seems to be the "we" in "Britain" and transgressing which leads to the erection of a barrier so high that it breaks the communication between teacher and student (another code of rhetoric!).[30]

Each of these codes, or disciplines, in turn has social institutions by which they maintain their laws and initiate their neophytes - professional associations, royal charters, banks, retailers, manufacturers, trade unions, political parties. These institutions build into larger and more substantial social formations until we reach again the major structural formations of what we call a society.

So we return to Smith and the power of symbol in communication and the power of a medium of communication. When he talks of the power of information he means the power of the owners of the press, of radio and television, to transmit a view of the world. He means how that power is to be regulated and maintained. He does not deal with, though others have, the power of the process of education and how it constructs a social world[31]. Interestingly he does not deal with information in the senses I have dealt with them here at all. In fact there appears to be a tendency not to see information as political or social (the result of the influence of the funding organisations on most of the research involved?) though information technology might be. (See section n of Chapter 1.) In work I did in 1975 in searches on the ERIC database, I was interested to note that although education or mass media and politics (may I use Boolean shorthand here?) had a substantial literature, libraries and politics had almost none at all, and it was with libraries that most information was then still concerned.

One book has dealt specifically with information and power.[32] Follow this up when I get the copy back*

information has owner - here, 3 or 8?

not what things mean but how they mean

Know before you know it that you'll need to know it

transaction processing, decision support - better here or in 4?

Information technology

So far we haven't had much to say about what computers and telephones have done to information. If the first information revolution was the development of the ability to speak in something approaching a 'language', the second was the coding of sound into graphic representations. The third was movable type which enabled the same message to exist in many different places at the same time. The fourth is the links of computing and telecommunications which has yet again changed the spatial and temporal relationship of human communication. The same piece of information can now not only be in different places at the same time, but the change which one recipient makes to the message may be communicated to the author, the other recipients, and new ones, at the same time.

More significant than even that is the consequence of digitisation. Precisely because the code can be transferred from one application, data which was generated for one purpose may be used for another. The change this effects in the costs of performing any operation we'll return to in the next chapter.

We have so far dealt with Information in the form of its social organisation. One problem remains - the sources of information - the physical world, plus the problem of information from the social world having the objectivity of information from the physical world.

internal and external to organisation lead to 3

information as social glue

design as three legged stool - in 4

design as idea first do we want to do it? does it have a sensible relationship with its site? its neighbourhood, its environment? is it beautiful? practical will it work will it make us proud will we feel at home in it does it have warmth, nobility, a human scale, a sense of its time and place? all from p 117 in 4.

Rider Haggard - eclipse

definitions, protocols and primitives

ownership - here or in 3?

formal and informal information

internal and external information - in 3?

changes in markets require changes in mental models - in 8?

ideas and data chapter 8 of Roszak in 8

information as signal for action - trigger?

architecture, town planning model here or 3 or 4?

Timeliness, accuracy and relevance Executive information systems. Business intelligence p26

The careful reader will have noticed that the quotations from the dictionary (and more that I haven't given) are circular - knowledge, information, data, are all self referential. It is historic convenience which led to them being used in particular ways at particular times.

Communication

The next of my little words is communication. This too dates from the 1380s and, you'll be surprised to note, means "the imparting, conveying or exchange of ideas, knowledge, information."

The sense of communication in the mass media I've referred to already. There remain two others. One is the face to face or person to person communication of conversation and letter writing and telecommunications, the other the linking together of computers into networks, interfacing with other devices of information technology. Within any system communication is the process by which signals are passed from component to component. To these matters we will return in Chapters 3 and 4.

The uses of information

Clearly all human endeavour involves information, just as all social discourse involves communication.

Of course information isn't about being timely, relevant and accurate - it is about having it, and someone else not.

Chapter 2: the case study

If we now have a strategic framework for taking decisions within the School and the Polytechnic, can we say something about information within this framework?

It seems clear that there will be at least four distinctly different views of the institution:

  1. the students
  2. the teachers
  3. the administrators
  4. the managers

But if we are going to be able to talk about information, can we say anything in general at first, which might be generally true?

Stage 1

Let's start as close to the beginning as we can get. A person applies to become a student. (For the sake of making this case study comprehensible I'm going to limit discussion to Local Authority funded full time students - there are hordes of other variations, but space and attention span preclude!) This involves filling in one or more forms from UCCA or PCAS (it is interesting that in all the performance of the removal of the binary divide, amalgamating these two bodies seems more complex than any other bureaucratic shift!). However, applications are for courses as well as to institutions, and students will apply to many institutions and for many courses. In addition they have to receive Local Authority mandatory grants, and they have to produce examination results or other conditions of entry which are appropriate for the course they are trying to get into. This form, in its electronic form, is sent by file transfer to the relevant institutions. All this happens in a time frame of about a year. Applications go to the Course Admissions Tutor who then has to process them.

There are about one million of these applications, being processed for 10,000 courses in 100 institutions by 10,000 admissions tutors!

The students have probably been sent prospectuses, produced at a cost of £n (*get number) each, describing everything in the institution, even if they know they are applying for only one course.

From the perspective of the teacher, a number of applications are received periodically, as computer printout. What has happened to the electronic form? It is lodged in a machine to which the tutor doesn't have access. The Tutor might re-enter the data locally; alternatively it might stay in paper based form until an offer has been made. The computerisation of the process started in 19* (get), the electronic transfer of the data in 19*, yet still these islands of automation!

But from the Admissions Tutor's perspective there is a further detail: how many places are you offering? The detail arises from the model which specifies the number of places available, on the basis of which so far the funding unit to the institution is based, as part of a negotiation round between the funding institution, the Department of Education and Science, its implementing agency, the Higher Education Funding Council, the Employers Quango, the Committee of Vice Chancellors and Principals (CVCP), and heaven alone knows what else. Yet although this ought to give a fixed number, in practice there seems to be a benefit to pushing the number of admissions up. In one year we hiked ours by 25% and this seems to be regarded as a good thing.

The process of selection is clearly a complicated one. As students are made offers, and places are filled, the student needs to decide which offer to accept (or more than one, for offers don't need to be final), and the tutor needs to decide to make an offer (which is more fixed, as it thereafter can't be withdrawn if all other conditions remain constant). Look at this from the perspective of the bargaining power of buyer and supplier. If a course is undersubscribed, then what position does the tutor take on admission standards? If the tutor admits to a lowering standard then what are the later implications in results as quality monitoring devices and the amount of teaching which has to be put in to achieve a change in result? In comparison with parts scheduling in a car factory we clearly have an interesting set of problems emerging. But I don't want to get into all of this yet, I'll leave most of this for later chapters. All I want to point to at this stage is that we have identified students, who have names and addresses and qualifications and course applications, and tutors who have or have not machines, and targets and notions of quality. Enough so far?

From the perspective of the administrators, let us presume that at this stage that is all they do - process data where for reasons as yet not clear, the machines can't do it, but from the perspective of the teacher, they are probably going to attempt to make the process as complicated as possible, in order to defend and increase their influence.

From the point of view of the managers, issues are more complicated, and I touch for the first time on a methodological issue which I can't deal with in detail here: what do we need to know to be able to say something about one in particular or about the system in general, how can we validate the statement - in other words in what sense is the statement true, and how should we proceed to find out?

It seems that at some time, let us say around 1988, a decision was taken to push student numbers up, and break any model, and that the government department said it would make a specific unit of funding available for increased numbers. But how do we prove this thesis?

Stage 2

People have now become students. They are registered on courses. There is a file (computer based) somewhere in central administration saying something about them. There is a file (computer based?) in a Local Authority, saying that they have addresses, that they are being paid grants, and according to results will continue to be paid grants for a period, which means a termly transfer of funds both to the students and to the educating institution. These units of finance are set, presumably by the Department of Education in negotiation with the Treasury, and somewhere there is presumably a settlement system impacting on the national economy and the gross domestic product to the tune of £10bn a year.

But back in the department there will also be a file (almost certainly paper based in a metal four drawer filing cabinet) and probably another computer based one for name, and as the years patter by, for marks and examination results, perhaps even another for industrial placement. In turn there will be class lists, attendance registers perhaps? And a Students' Union membership, a computer centre registration, a library membership, something in accommodation, perhaps something for Welfare? And on and on and on.

With a bit of luck the student has been given a paper based booklet outlining the course, a syllabus, a reading list and a timetable, but for this to have happened by the beginning of the teaching year will be rare.

From my earlier remarks on strategy in chapter one, does this sound like the use of information for competitive advantage? If not why not?

Stage 3

Meanwhile back to the admissions tutor, who now has a class list for the Head of Year, with the number of students, their names, the Head of Year for the Course Director and the Course Director for the Head of School. Somewhere during this process something else has been going on. The teaching schedule for every teacher, the rooming requirements (ha rooming - I thought you'd never get there!) for each class, all together produce the timetable. Another complicated procedure. Rooming has to handle the requirements of all the departments, if a centralised resource system is used. If costs are attributable, then what is the impact on student recruitment of rooming availability, and if student numbers exceed the model, what is the impact on rooming? At the moment let us safely presume that there is no computerisation of any of this process, not even electronic mail.

But changes in student numbers also mean changes in workload for individual teachers. And changes in the balance of course provision mean changes in the resources on which they may draw. How then to work out an equitable staff workload, if that is in fact what is desirable? What is the dowry which every student carries? Is this available at the level of the teaching department, or are many other costs carved out first, so that the full-time equivalent and the staff-student ratio have by now long since become notional? Are the teachers isolated individuals, or do they have a collective consciousness, or even a collective organisation as a result of structural negotiations involving the formalisms of trade unions and contracts?
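As a purely illustrative calculation - the numbers are invented, not institutional - the notional quantities just mentioned can be derived like this:

    # invented figures: full-time equivalent (FTE), staff-student ratio (SSR)
    # and the "dowry" reaching the department after central costs are carved out
    full_time_students = 420
    part_time_students = 180            # counted here at 0.5 FTE each (an assumption)
    teaching_staff = 32

    student_fte = full_time_students + 0.5 * part_time_students     # 510.0
    ssr = student_fte / teaching_staff                               # about 15.9

    unit_of_funding = 3000              # pounds per student FTE, invented
    central_top_slice = 0.35            # share retained centrally, invented
    dowry = unit_of_funding * (1 - central_top_slice)                # 1950 pounds

    print(f"FTE {student_fte}, SSR {ssr:.1f}, dowry {dowry:.0f} pounds per student")

The arithmetic is trivial; the point is that the inputs to it are rarely visible at the level of the teaching department.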

So teachers have timetables and class lists before the teaching year begins. This has so far been a prodigious exercise, and almost entirely human information processing, with a limited amount of electronic list keeping, forty years after the invention of the universal calculating engine!

Stage 4

The year has started, classes are being conducted. Lecturers give lectures, students are spending grants, rooms are being occupied. Yet does anyone know (*get name) the price of anything, never mind its value? How much extra time is it worth giving a weaker student help, and how much time on pastoral work? What does it cost to give a lecture which involves linking together a computer, a television set, all the students sitting at terminals and postgraduate demonstrators; to read an essay, write a handout, print out a class set of handouts, mark an examination, hold a tutorial, or have students prepare seminars?
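A hedged illustration of what such pricing might look like - every figure is an assumption of mine, not a measured cost:

    # invented figures putting a price (not a value) on some of the activities above
    hourly_staff_cost = 40.0            # pounds per hour: salary plus overheads, assumed

    hours_per_activity = {
        "prepare and give one lecture": 4.0,
        "mark one examination script": 0.5,
        "read and comment on one essay": 0.4,
        "one hour of pastoral tutorial": 1.0,
        "write one class handout": 2.0,
    }

    for activity, hours in hours_per_activity.items():
        print(f"{activity:32s} about {hours * hourly_staff_cost:6.2f} pounds")

    # sixty essays at 16 pounds each is nearly a thousand pounds of staff time,
    # a figure which the information flows described here never surface

Nothing here tells us the value of any of these activities, only a guess at their price.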

So there seem to be some gaps in our information flows, which mean that our capacity to take decisions is limited.

Stage 5

But there is another world entirely of information. Once upon a time it would have been mainly books and journals. It would have been a physical space called a library, with an author catalogue, a subject catalogue and an issuing system. But now there are worlds of electronic networks linking millions of computers, access to a part of which gives access to much. There are gigabytes of data available on optical disks which might be in your room, or in a room down the corridor, or anywhere on the network. Most of it will be words, but much will be moving images, maps, all sorts of rich treasures such as Prospero would have marvelled at. All you need is a terminal emulation package and a modem, or access to an X.25 gateway, but preferably an XOpen terminal and 64Kb comms (but wait for chapter 4 for all these technical sweeties).

Stage 6

Elsewhere in the place, invisible to the students, and probably to the teachers as well, the place is being cleaned, heated, secured, fire alarmed, supplied with A4 85 gsm white long-grained paper; staff contracts are being negotiated, adverts placed, lawns mowed, telephone directories produced and updated, all this around the hub of a financial information system in which n (*do estimate or get number) transactions are processed every year, amounting to a budget of £50 million.

Stage 7

Elsewhere, over the whole of the year, and possibly involving almost everyone, or no one, will be a planning and monitoring cycle, in which the targets for the last year, the financial model, the targets for the next year, the spend, aggregations and disaggregations, are all mulled over; the consequences of the mulling being redundancy, increased workload, no replacement bulbs for the overhead projector, another hundred students, all the little comings and goings of life, while in yet another place the numbers, at higher levels yet of aggregation, sum up into the rate of inflation, the balance of trade, interest rates, wage offers, disputes, settlements.

Stage 8

There is another layer of information entirely - the work of the academic staff as researchers rather than as teachers. It is the contribution to original research and the capacity to synthesise developments in other parts of the world, to draw on the experience of what is happening in industry, which are turned into the lectures and teaching material on which the students' learning is based.

This has two important information factors. The first is the resources which are required in order that this research may be undertaken; the second is the proportion of the workload carried by teachers which should be allocated to research and personal development.

The annual workload of a teacher then divides into teaching, research, pastoral, development and administration. But what gets done depends on a local level of management for which the performance indicators and the units of measurement are missing.
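To make the missing unit of measurement concrete, here is a minimal sketch with invented shares - they are assumptions, not norms, and the 1,650-hour year is itself an assumption:

    # an invented annual workload split across the five headings above
    annual_hours = 1650
    allocation = {
        "teaching": 0.45,
        "research": 0.25,
        "pastoral": 0.10,
        "development": 0.10,
        "administration": 0.10,
    }

    for heading, share in allocation.items():
        print(f"{heading:15s} {share * 100:4.0f}%  {annual_hours * share:6.0f} hours")

    assert abs(sum(allocation.values()) - 1.0) < 1e-9   # shares must account for the whole year

Whether any such table exists, and who negotiates it, is precisely the local management question the text raises.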

Endnote

Almost all the information here is numbers, some of it words; little of it appears coherently in computerised databases, yet much of it does appear there. Few of the primitives, protocols or models appear open and negotiable.

Notes

[1]Shannon, C.E. (1948) A Mathematical Theory of Communication. Bell System Technical Journal, 27, pp. 379-423 and 623-656.

[2]Smith, Anthony. Politics of information (get citation)

[3]I'm not claiming originality for noting this point. Theodore Roszak develops it in The cult of information: the folklore of computers and the true art of thinking. Cambridge: Lutterworth, 1986 as did Fritz Machlup The study of information. New York: John Wiley, 1983.

[4]this section is drawn entirely, but selectively, from The Oxford English Dictionary.

[5]For example "Information is data that have been put into a meaningful and useful context and communicated to a recipient who uses it (sic) to make decisions." Burch, John and Grudnitski, Gary Information systems: theory and practice. 5th ed. New York: John Wiley, 1989. p3.

[6]Galliers, Robert. Information analysis: selected readings. Wokingham: Addison Wesley, 1987. p4.

[7]Nichols, G.E. On the nature of management information. Management Accounting v*n* (April 1969), pp. 9-13, 15; quotation at p. 11.

[8]Drucker, Peter. The new organisations. Harvard Business Review v*n* Jan-Feb 1988. p*

[9]I don't want to touch here on whether the data sets are acquired by digitisation, remotely sensed data, aerial photography, survey or whatever - these technical questions are dealt with at length elsewhere. I'm using the term digitisation in the sense of finally turning things into 0s and 1s.

[10]The nature of the scientific effort is not what we wish to cover further - simply to point to the physical world as a source of evidence for our information systems. Michael Young's Knowledge and control (Milton Keynes: Open University, 1971) provides a source of material for anyone wanting to follow this further.

[11]find suitable suggested readings

[12]get a couple of citations from Chris Hutchinson

[13]some suitable references on these thorny matters

[14]For an interesting history of dictionaries and encyclopaedias see Roberts *get citation

[15]Barber, C.L. The story of language. London: Pan, 1964. is a very readable simple introduction. To deal with these matters in more complexity see *get citations

[16]is this so - check it *

[17]see his S/Z (*get citation)

[18]Ronald Stamper elaborates on the directions which research in this area could take for information systems designers in Semantics in Critical issues in information systems research edited by Boland, R.J. and Hirschheim, R.A. Chichester: John Wiley, 1987 p43-78.

[19]For a general introduction to semiology I'd suggest * hoho!

[20]can we have a set of pictures? *

[21]get citations from Stuart*

[22]The extent of the complexity of the world which can develop is shown in Markland, Robert E. and Sweigart, James R. Quantitative methods: applications to managerial decision making. New York, John Wiley, 1987.

[23] these I've taken from Machlup p43

[24]Checkland for introduction to the world of systems people?

[25]I'm indebted to Thomas Kuhn at the Shearman lectures, London 1988 for expanding this argument.

[26] is this where the social construction of reality, Berger and Luckmann, and the Knowledge and Control stuff comes in?

[27]there is presumably something decent on the development of the term Petros?

[28]Barr, A and Feigenbaum, E. The handbook of artificial intelligence. 2Vols. Los Altos: William Kaufman, 1981 p143, quoted in Machlup p 31

[29]The substantial debate on the relation between the structures and agents of history has been dealt with recently in Alex Callinicos The making of history. Cambridge: Polity Press, 1987

[30]For the details of these I am to blame, but rather than thinking of categories of code for myself I have borrowed those of Barthes in "Textual analysis of a tale of Poe", from Blonsky, Marshall. On signs. Oxford: Basil Blackwell, 1985.

[31]Paulo Freire is probably the clearest in this argument. See his Pedagogy of the oppressed. It is interesting that it is the work of Piaget which appears to have emerged as the dominant pedagogic paradigm for those involved with computer assisted learning. I think that will change.

[32]O'Brien, Rita Cruise. Information, economics and power. London: Hodder & Stoughton, 1983.

[33]
