Intellectual property, that is to say the private ownership of words and ideas: it doesn’t sound like the kind of relationship with knowledge that a place of higher learning like a university ought to foster, does it? Besides, how do you even steal words, or ideas? They are hardly gone after you have snatched them. How about ‘lying about whose work it is,’ then? Perhaps that’s the crux of the matter. Producing knowledge requires an effort, which is usually defined as ‘work’. If anybody could simply claim the credit for the work of anybody else then the knowledge industry – which is regulated by market relations that monetise this credit in various ways – would cease to function. But surely the social good lies in the knowledge itself, not in its attribution, and besides the example of the anonymous authors of so much oral poetry, traditional music and contemporary street art, it is quite possible to imagine a utopian pinko knowledge industry where ideas circulate freely, thus facilitating and accelerating the production of more knowledge.

Because in truth, how can you locate the point of origin of an idea or a certain sequence of words except in the culture itself? Roland Barthes, circa 1968, in ‘The Death of the Author’:

The text is a tissue of quotations drawn from the innumerable centres of culture. […] [T]he writer can only imitate a gesture that is always anterior, never original.

The following year, Michel Foucault began his essay ‘What is an author?’ by posing a question originally formulated by Samuel Beckett: ‘What does it matter who is speaking?’ to which Barthes had replied in advance:

writing is the destruction of every voice, of every point of origin. Writing is that neutral, composite, oblique space where our subject slips away, the negative where all identity is lost, starting with the very identity of the body writing.

Now I don’t want to dumb down these two essays and their peculiar conversation to a couple of easy-to-digest snippets, nor ignore the specific historical and cultural conditions in which they were produced, at a time when what Foucault dubbed ‘the-man-and-his-work-criticism’ held full sway. But one could legitimately ask: if an understanding of intertextuality and the ideas of the death of the author and the author-function have been around for so long, why haven’t they changed the way the publishing industry operates, or forced a rethinking of what constitutes plagiarism in publishing and academia? Is it simply a case of those critics and those ideas having been cast aside?

I would say yes, and no. On the one hand, yes, the publishing industry has changed its ways not one iota, nor did Barthes or Foucault themselves, to my knowledge, ever renounce their name on the cover or the customary protections and moral rights afforded to a published author. Ditto Ihimaera. Hell, even Banksy has claimed these, albeit ‘against his better judgment’. But more profoundly, I think the idea that authorship and its integrity matter has proved equally resilient. Pierre Menard himself tells us that we can’t quite dispense with it completely – even as he goes about turning it upside down – by showing how differently we would have to read Don Quixote if we knew it to have been written by a 20th-century Frenchman as opposed to a 17th-century Spaniard.

Of course, you say? Well, yes. But consider how electronic writing and the Internet were meant to change all this, further unsettling traditional ideas concerning just who it is who does the writing and possibly killing the author all over again by circulating near-infinite variations on a near-infinite number of texts without a discernible point of origin, or a shred of attribution. This remains a source of anxiety, but I would argue it really hasn’t happened yet. If anything, people who write on the Web have developed a whole new and highly sophisticated sensitivity towards issues of textual attribution and historicity. I’ve touched in the past by way of example upon the edit history of Wikipedia entries, which shows an attention to intricate philological issues on the part of a writing community that consists largely – and I mean this in the most non-derogatory way possible – of amateurs.

The credible bloggers are also very careful to acknowledge their sources, and the manner in which they do so is interesting, for the hyperlinks provided often point to the pages where each discovery took place. It’s only by means of further hyperjumps, following a Star-Trek-like wormhole of sorts, that one is likely to get to the source proper, the location where that particular text came to be, the ‘mothertext’ if you will. Or not, of course, there’s always the possibility that one or more of the pages might have expired by then, but that for once doesn’t matter: it’s in that pattern of connections, however provisional and unstable, that one can glimpse a new way of mapping the 3-dimensional space where authorship and readership come to coexist.

I want to steal this talk again, and to discuss what the author-function of a blogger, amongst others, might be. I suspect we’ll find it is highly plastic and I’ll go as far as to coin a word to describe this, allthor, an extremely catchy and MBA-friendly term that perhaps some of you might help me fill with meaning – I have but the vaguest of ideas at present, save for the fact that I think it would be an interesting question to explore.

But in the meantime, what of Ihimaera’s indiscretions? Would it even matter that he neglected to credit those sources, were it not for the legal framework within which the publishing industry operates, or the possibly antiquated notions of originality and individuality that we choose to entertain in this particular medium? I think that even under those conditions it does, it would. For crediting a source, the site where some particular words came together in the way that they did, means also preserving a trace of the text’s place within the culture that produced it, of its genealogy. But as in a genealogy, the presentation of the copied text is better viewed as the re-presentation of the original, a facsimile perpetuating a forgotten past to an unknowing or unwitting reader anew, perhaps guiding them closer to a history they may well otherwise have lost.

Consider a remote and fanciful future where Menard’s Quixote survived while Cervantes’ didn’t, and furthermore there was no knowledge that the earlier of the two books had even been written. This is the kind of loss – of metadata, of history, of memory – that you would be measuring every day.


Donald does have a few inheritances from his grandad… But they are not particularly meaningful.

Instead, Donald inherited something else, something that he would regard as much deeper and more meaningful than a mere set of objects… Donald inherited from his family his relationship to work, a relationship that he could and did constantly build on, and expand and make part of himself. His was a family that involved itself very deeply in what they did, and it didn’t matter much if this was formal paid labour or the informal work of domestic life. Whether as builders or in cooking, they threw themselves into the activity, and in large measure what they did was what they became.

For Donald, this is the key to his relationship with both his job and his home… He loves his work and sees it as a craft…

No-one would consider a buyer for a retail chain as a craftsman. Yet I have no hesitation in using this term to describe what Donald is.

Daniel Miller, ‘Portrait Twelve: Making a Living’, in The Comfort of Things (Polity Press, 2008).

Well, despite the weather being crappy we made the most of our holiday and tried to get into the water every day. And we pretty quickly learned that the guide-book we bought at the airport was pretty much 50/50 on accuracy. Big chunks of the reef in Rarotonga are over-fished and pretty sparse on the wildlife front.

So here’s something, I’m most of the way through Here Comes Everybody by Clay Shirky, and I’ve almost had enough. It’s an interesting book, and I agree with many of its premises and arguments because I’ve thought them myself. For instance, his assertion that you shouldn’t fret about all the stuff being written on the net. It seems that because we’re used to stuff being directed at us we tend to assume that everything we read in social media is written for us. But this simply isn’t the case.

Most social media is written for a small group of people known to the author, although this group may vary from topic to topic, and the stuff is put out into the ether as a complement to day to day conversation. Or put another, slightly more theoretical way, our embodied selves are now extended and expanded into cyberspace. The SciFi writers are already all over that one though, with independent avatars working to process or accumulate additional embodied knowledge for us being a common (some would say banal) feature of contemporary novels.

What has annoyed me about Shirky and other authors boosting the “Web2.0s” is the constant carping about how social media will fundamentally transform the way our society works. So, you, Shirky, and that guy who wrote Wikinomics, we get the picture!! Move on, please. Yes, social media has allowed the masses to free up their voices, and centralised organisation no longer carries the weight it did. And, we can now harness multiple points of thought to achieve what it used to take hierarchical organisation to achieve. But where to next?

After working with and reading about social media for a bit now, I’m in agreement that it is a revolution in social organisation and creation of information. In day to day terms that means we are able to access more and better information from across the globe, if we know how. And if we don’t know how then there are more and more people stepping into the market opportunity that is filtering signal from all the noise. But does that mean that people are better at utilising the unprecedented amounts of information they have access to?

Are we actually any wiser?

A while back I heard a geezer from Canterbury University speaking about IQ testing. Apparently average IQs in this day and age are much higher than they were at the turn of the C20th. But, he argued, this is mostly because the kinds of intellect the tests are looking for are now far more prevalent, primarily due to modern education. So rather than intelligence being higher, the kinds of thinking we teach are entrenched deeply enough that more people score higher on a test designed to detect that very education. If you get what I mean.

So people aren’t actually smarter, they’re just better trained in the way the academy wants us to think.

This suggests to me that increased information won’t actually change people themselves. It will however recondition our society to know how to manage large volumes of, for want of a better word, crap. Something I toyed with a wee while back was the idea that ‘the path is wiser than the walker’. In the context we’re talking about here, the shape of the interweb is influenced by the way that people act. Lots of people using social media leads to lots of noise of a particular sort, and there are signals for some contained therein. Social media in effect creates a series of “paths” followed by people, and which over time become “the place to get information”. Witness Wikipedia.

The revolution produced by social media really just means that we produce reliable information for each other, and don’t source this same consumable from corporations. Nothing new in that statement though.

Where this big circle of wondering leads me to is, how much are we creating the web, and how much is the web creating us? Because I’m inclined to think that our increasing dependency on the interweb to source and manage our information will begin to influence social thought itself in much the same way as education has shaped IQs. The production of noise becomes normal and expected, with the most valuable members of society becoming those who can filter for signal.

One of my mixed fortunes was being brought up by my mum. Solo motherhood is a hard row, and I had two younger brothers she had to keep an eye on as well. But, it meant that I was able to choose my own male role models. Naturally this included my uncles and my grandfather, but also included blokes off TV, out of books, and in bands.

It’s a strange thing trying to define yourself, but I guess it’s something we all do. It’s just that some of us have more clearly defined markers, aeh?

So, masculinity. What seems to be a common mistake is defining femininity and masculinity as roles, or in the doing. Consequently calls are made for men to be more masculine by not being afraid to do childcare, or perform domestic duties.

I’ve never really understood that though, because what you do and who you are are two entirely separate things. Yes, women were traditionally relegated to particular roles and activities, but I’m not certain that they actually defined femininity itself. Certainly these roles were used to restrict women, but I can’t see how having men perform some of these tasks would or could change masculinity or femininity.

Put another way, men doing domestic chores doesn’t make them feminine, so why is it assumed that performing domestic chores makes women feminine? My answer would be that it does not. There is without doubt a strong relation between the “domestic space” and “femininity”, but it is only a relation, not a dependency. The real question is, “to what extent does domesticity contribute to femininity?”

And I’d have to assume that for some women the answer is, “not at all.”

Now, you can flip that question over and ask to what extent traditional male roles like ‘providing’ define masculinity. And again, for some men the answer is negative. It seems that the doing isn’t what masculinity is all about.

What my lack of predefined male role-model allowed me to realise is that masculinity is about the being. Men don’t do things to make themselves masculine, they just are. Masculinity is something you can learn and imitate, but the essence of being a man is not an activity, it just is. And it is also an individual essence, ineffable.

Perhaps Austin Powers is so funny because everyone recognised ‘the mojo’ for what it is!

Putting aside cheesy stereotypes, masculinity is an acquired essence that grows and/or changes as a man matures. Moreover, like many ineffable things it is better defined by what it is not. It is not independent of femininity for example, but is enhanced by it.

My own opinion is that freeing up masculinity from the doing is liberating for both genders. Because we can start to see it as an essence, or an attribute, it can vary and amend itself to its circumstances. Moreover, my masculinity doesn’t undermine or boost yours, we’re each able to define ourselves.

This probably needs teasing out, especially to prevent the introduction of dogmatic or stereotyped masculinity of the sort I mentioned in the last post on the concept (fundamentalism). Would like to hear from anyone about it.

So if you’re not familiar with the cynefin framework, it’s an idea under iteration by Dave Snowden and his colleagues. It’s a deceptively simple-looking framework that allows, as I understand it, a problem solver to more easily interpret two things: the current structural coherence of an organisation, and/or the mind-frame of a decision-maker.

To be more specific about the first thing, it doesn’t usually apply to an entire organisation. It can however be used as a frame to better interpret how aspects of an organisation are working, or not, as the case may be. So for example, a phone room probably falls into the simple domain. It’s all “calls in, answers given”. Things can become complicated, and it’s then that a manager needs to make a decision about something, and everything returns to ‘simple’. But, elsewhere within that same organisation, things can be totally out of hand and could only be characterised as chaotic.

In relation to the second thing, someone could possibly see their occupation or vocation as simple, when in fact it’s complex, or at least complicated. They might, say, be seeing everything a little too black and white.

Regardless of whether the frame of reference is internal or external to the subject individual, what all the cynefin domains have in common is the requirement that a leader or manager make a decision or set of decisions. Something we can assume is that the motivations for these decisions will frequently be constructed to ‘order’ their environment. We can assume this because people like ordered environments, especially in organisations, hence the noun, and even more especially in public sector departments (which have constituencies or, to use a worse title, ‘clients’, who rely on predictability and reliability). Actions by decision-makers in the cynefin framework will most usually be undertaken to ensure improvement in order, even if ‘improvement’ actually means ‘establishment’.

What dawned on me while listening to Dave outline these ideas (which I may or may not have completely grasped), is that an illustration of the duration and effectiveness of decision-making was lacking.

There are at least two dimensions to this effectiveness. The first is the wisdom of the decisions made. Some decisions are just stupid, so their effect is (hopefully) short-lived. We’ve all worked for someone who issues an order that we immediately discover a work-around to avoid. The duration of the effect of the decision is therefore very low. The second dimension is the ‘depth’ of the effectiveness of the decision. Some decisions are so good (or bad), that they not only affect the immediate environs of the decision-maker (for instance their team), but also other units within the ‘ecology’ of an organisation. A positive example of this could be the conduct of a safe-fail experiment that creates a tool useful to an entire organisation.

So how to illustrate this? I think there are two main components: time, and the duration/depth of effect.

Time always advances towards us from an unknowable future. We make decisions as time brings the need to do so forward. We can preempt some future events and decide a course of action or response beforehand (what I’ll wear to work tomorrow), but most decisions are made ‘on the fly’ (which of the foods presented to me will I choose?) Decisions in time establish order for a particular duration, i.e. what I choose to wear in the morning is what I wear all day. And sometimes tomorrow. I don’t need to make any more decisions for a given time.

In the simple domain of the cynefin framework time will present fairly predictable decisions, and at a fairly predictable pace, and this will change/accelerate as you move around the framework to the chaotic. No surprises there I hear you say.

Confronting the future, and the advancing ‘issues’, is the active “decision-making horizon”. This is the space in which decision-makers operate, and to which each of the cynefin domains are assigned. Everything within that horizon is well-covered by the practitioners’ network, but what happens afterwards is interesting.

In an abstract sense, once a decision-maker creates order, all those bubbles floating around before the horizon are placed ‘just so’ beyond the active horizon on which the decision-maker operates. Once they’re beyond the horizon they represent ‘order’ and ‘decisions made’. But as time progresses that order disintegrates and another decision-making horizon is likely to be needed.

My thinking is that the durability of the order is dependent on the context in which the decisions were made, i.e., which cynefin domain, but also the wisdom of the decision itself. A good decision made in a chaotic time could well outlast a poor decision made in a simple environment.
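The idea above, durability as a joint function of which cynefin domain a decision was made in and how wise the decision was, can be sketched as toy code. Everything here (the volatility numbers, the function name, the 0-to-1 wisdom score) is my own illustrative assumption, not anything from the cynefin framework or Cognitive Edge itself:

```python
# Toy model: order decays faster in more volatile domains, and wiser
# decisions produce longer-lasting order. Numbers are purely illustrative.
DOMAIN_VOLATILITY = {
    "simple": 0.2,
    "complicated": 0.5,
    "complex": 0.8,
    "chaotic": 1.5,
}

def order_durability(domain: str, wisdom: float) -> float:
    """Estimated lifespan of the order a decision creates.

    wisdom is a 0..1 score; durability falls as domain volatility
    rises and rises with the wisdom of the decision.
    """
    if not 0.0 <= wisdom <= 1.0:
        raise ValueError("wisdom must be between 0 and 1")
    return wisdom / DOMAIN_VOLATILITY[domain]

# A good decision made in a chaotic time can outlast a poor decision
# made in a simple environment:
good_in_chaos = order_durability("chaotic", 0.9)   # 0.9 / 1.5
poor_in_simple = order_durability("simple", 0.1)   # 0.1 / 0.2
```

On this sketch the chaotic-but-wise decision (0.6) does indeed outlast the simple-but-poor one (0.5), which is all the model is meant to show: context and wisdom trade off against each other.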

Since attending a course a week or so back I’m now an accredited Cognitive Edge Practitioner, which is an honour I’d hang next to the other degrees. And I’m not taking the mickey there. CE practice is a highly interesting methodology that a colleague and I are discussing the applicability of in relation to our research and evaluation work. If it lives up to its promise, it could present some very useful data for our workplace.

But we’re not here to discuss work. I’m cogitating an idea about an area that wasn’t highly discussed in the CE accreditation, and will try to tease out a little further before putting it up here, and/or outlining it on the Cognitive Edge site.

CE is a method that simplifies understanding how leaders can and do make decisions in organisations. But it doesn’t appear to conceptualise the “time-dimension” of that decision-making process. I think there is a distinct process followed by leaders in all five of the areas the CE framework creates. This process is common to all leaderships, and varies only in the intensity of its application and the duration of its effects.

I’ll do a little more thinking/sketching, and then run it out for comment.
