Tagged: algorithmic

  • feedwordpress 11:52:30 on 2018/01/02 Permalink
    Tags: algorithmic

    The Cooperative Principle in Conversation versus the Prejudice in Silence 

    In the following, I understand the Internet as a massive text connected by many participants conversing with one another. Parts of the text are in close connection, and the discussion can be viewed as heated insofar as the sub-texts reference each other in some way (links are merely one example of such cross-references). Other parts of the text are fairly isolated, hardly discussed, rarely (if ever) referenced. I want to argue that the former parts are “well formed” in the sense that they follow Grice’s (1975) cooperative principle, and that the latter seem to evidence a sort of prejudice (performed by the disengaged participants) – which I hope to be able to elucidate more clearly.

    Before I embark on this little adventure, let me ask you to consider two somewhat complementary attitudes people commonly choose between when they are confronted with conversational situations. These are usually referred to as “feelings” – and in order to simplify, I will portray them as if they were simply logically diametrically opposed … whereas I guess most situations involve a wide variety of factors, each varying in shades of gray rather than simple binary black versus white, one versus zero. Let’s just call them trust and distrust, and perhaps we can describe the elements of any situation as trustworthy versus untrustworthy.

    Next, let me introduce another scale — ranging from uncertainty (self-doubt) to certainty (self-confidence).

    Together, these two factors of prejudice (in other words: preliminary evaluations of other-trustworthiness and of self-confidence) crucially impact our judgment of whether or not to engage in conversations and discussions and to voice our own opinions, whether online or offline.

    As we probably all know, the world is not as simple as a reduction to two factors governing the course of all conversations. For example: How does it happen that a person comes to fall on this end or that end of either scale? No doubt a person’s identity is influenced by a wide variety of group affiliations and/or social mores, norms and similar contextual cues which push and pull them into some sort of category, whether left or right, wrong or fixed, up or down, in or out with mainstream groupings. One of the most detailed investigations of the vast complexity and multiplicity woven into the social fabric is the seminal work by Berger and Luckmann titled “The Social Construction of Reality”.

    While I would probably be the first to admit the above approach is a huge oversimplification of something as complex as all of human interaction on a global scale, I do feel the time is ripe for us to admit that the way we have approached the issue thus far has been so plagued with falsehoods and downright failures that we cannot afford to continue down this path. In an extreme “doomsday” scenario, we might face nuclear war, runaway global warming, etc., all hidden behind “fake news” propaganda spread by robots run amok. In other words, continuing this way could be tantamount to mass suicide, annihilation of the human race, and perhaps even all life on the planet. Following Pascal, rather than asking ourselves whether there is a meaning to life, I also venture to ask whether we can afford to deny life has any meaning whatsoever – lest we be wrong.

    If I am so sure that failing to act could very well lead to total annihilation, then what do I propose is required to save ourselves from our own demise?

    First and foremost, I propose we give up the fantasy of a simplistic true-or-false type binary logic that usually leads to the development of “Weapons of Math Destruction”. That, in my humble opinion, would be a good first step.

    What ought to follow next might be a realization that there are infinite directions any discussion might lead in (rather than a simplistic “pro” vs. “contra”). I could echo Wittgenstein’s insight and say that the limits of our language are the limits of those directions – and in this age of devotion to ones and zeros, we can perhaps find some solace in the notion of a vocabulary of more than just two cases.

    Once we have tested the waters and begun to move forward toward the vast horizons available to us, we may begin to understand the vast multi-dimensionality of reality – for example including happy events, sad events, dull events, exciting events and many, many more possibilities. Some phenomena may be closely linked, other factors may be mutually orthogonal in a wide variety of different ways. Most will probably be neither diametrically opposed nor completely aligned – the interconnections will usually be interwoven in varying degrees, and the resulting complexity will be difficult to grasp simply. Slowly but surely we will again become familiar with the notion of “subject expertise”, which in our current era of brute-force, mechanistic algorithms has been so direly neglected.

    If all goes well, we might be able to start wondering again, to experience amazement, to become dazzled with the precious secrets of life and living, to cherish the mysterious and puzzling evidences of fleeting existence, and so on.

    Tags:
    propaganda, rational media,
    language, natural language,
    algorithm, algorithms, algorithmic,
    big data, data, research, science,
    quantitative, qualitative,
    AI, artificial intelligence,

    Conversation

     
  • nmw 15:27:59 on 2016/07/12 Permalink
    Tags: academia, academic, algorithmic, bandwagon, bandwagon effect, compute, corrupt, corrupted, corruption, group think, groupthink, majority, populism, populist, reason, systemic, trusted, universities, valid, validity, vote, votes, voting

    The Spectre of Populism 

    There is a spectre haunting the Web: That spectre is populism.

    Let me backtrack a moment. This piece is part of an ongoing series of posts about „rational media“ – a concept that is still not completely hard and fast. I have a hunch that the notion of „trust“ is going to play a central role… and trust itself is also an extremely complex issue. In many developed societies, trust is at least in part based on socially sanctioned institutions (cf. e.g. „The Social Construction of Reality“) – for example: public education, institutions for higher education, academia, etc. Such institutions permeate all of society – be it a traffic sign at the side of a road, or a crucifix as a central focal element on the altar in a church, or even the shoes people buy and walk around with on a daily basis.

    The Web has significantly affected the role many such institutions play in our daily lives. For example: one single web site (i.e. the information resources available at a web location) may be more trusted today than an encyclopedia produced by thousands of writers ever was – whether centuries ago, decades ago, or even just a few years past.

    Similarly, another web site may very well be trusted by a majority of the population to answer any and all questions whatsoever – whether of an encyclopedic nature or not. Perhaps such a web site might use algorithms – basically formulas – to arrive at a score for the „information value“ of a particular web page (the HTML encoded at one sub-location of a particular web site). A large part of this formula might involve a kind of „voting“ performed anonymously – each vote might be no more than a scratch mark presumed to indicate a sign of approval (an „approval rating“) given from disparate, unknown sources. Perhaps a company might develop more advanced methods in order to help gauge whether the vote is reliable or whether it is suspect (for example: one such method is commonly referred to as a „nofollow tag“ – a marker indicating that the vote should not be trusted).
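
    To make the idea concrete, below is a minimal sketch of such an anonymous link-voting tally – a toy illustration only, not any real search engine’s formula; the input format and the function name are assumptions made up for this example.

        # Toy "approval rating" tally: each inbound link counts as one anonymous
        # vote for its target page, and votes marked "nofollow" are discarded.
        # Illustrative only -- not any real search engine's algorithm.
        from collections import Counter

        def approval_scores(links):
            # links: iterable of (source_url, target_url, rel) tuples, where rel
            # might be "nofollow" or "" -- a made-up input format for this sketch
            votes = Counter()
            for _source, target, rel in links:
                if "nofollow" in rel:
                    continue  # this vote is marked as not to be trusted
                votes[target] += 1  # one anonymous scratch mark of approval
            return votes

        # Example with made-up data:
        links = [
            ("a.example", "pages.example/1", ""),
            ("b.example", "pages.example/1", "nofollow"),
            ("c.example", "pages.example/2", ""),
        ]
        print(approval_scores(links))
        # Counter({'pages.example/1': 1, 'pages.example/2': 1})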

    What many such algorithms have in common is that on a very basic level, they usually rely quite heavily on some sort of voting mechanism. This means they are fundamentally oriented towards populism – the most popular opinion is usually viewed as the most valid point of view. This approach is very much at odds with logic, the scientific method and other methods that have traditionally (for several centuries, at least) been used in academic institutions and similar „research“ settings. At their core, such populist algorithms are not „computational“ – since they rely not on any kind of technological solution to questions, but rather scan and tally up the views of a large number of human (and/or perhaps robotic) „users“. While such populist approaches are heralded as technologically advanced, they are actually – on a fundamental level – very simplistic. While I might employ such methods to decide which color of sugar-coated chocolate to eat, I doubt very much that I, personally, would rely on such methods to make more important – for example: „medical“ – decisions (such as whether or not to undergo surgery). I, personally, would not rely on such populist methods much more than I would rely on chance. As an example of the kind of errors that might arise from employing such populist methods, consider the rather simple and straightforward case that some of the people voting could in fact be color-blind.
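
    The color-blindness example can even be made concrete with a tiny simulation (all numbers below are invented purely for illustration): the larger the share of voters who cannot actually perceive the property being voted on, the closer the „most popular“ answer drifts toward a coin flip.

        # Toy simulation of the color-blindness example: voters who cannot
        # perceive the property in question guess at random, so as their share
        # grows, the majority verdict approaches mere chance. Numbers are made up.
        import random

        def run_vote(n_voters=1000, blind_fraction=0.6, true_answer="red"):
            options = ["red", "green"]
            votes = []
            for _ in range(n_voters):
                if random.random() < blind_fraction:
                    votes.append(random.choice(options))  # cannot tell, so guesses
                else:
                    votes.append(true_answer)             # perceives correctly
            winner = max(options, key=votes.count)
            return winner, votes.count(winner) / n_voters

        print(run_vote(blind_fraction=0.6))   # the true answer usually still wins
        print(run_vote(blind_fraction=0.98))  # now essentially a coin flip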

    Yet that is just the beginning. Many more problems lurk under the surface, beyond the grasp of merely superficial thinkers. Take, for example, the so-called „bandwagon effect“ – namely, that many people are prone to fall into a sort of „follow the leader“ kind of „groupthink“. Similarly, it is quite plausible that such bandwagon effects could influence not only people’s answers, but also the kinds of questions they feel comfortable asking (see also my previous post). On a more advanced level, complex systems may also be influenced by the elements they comprise. For example: while citation indexes were originally designed with the assumption that citation data ought to be reliable, over the years it was demonstrated that citations are in fact quite prone to a wide variety of errors and corruption, and that citation analysis is not at all a reliable method. While citation data may have been somewhat reliable originally, it became clear that citation fraud eventually corrupted the system.

     
  • nmw 17:48:26 on 2016/05/31 Permalink
    Tags: algorithmic, Bible, file, file name, file names, filename, filenames, files, graphical user interface, GUI, hardware, HCI, human-computer interaction, text

    The Ubiquity of the Text Box (excursus) 

    One of my favorite authors in the field of „search“ is John Battelle. Although he was not trained in the field of information science or information retrieval, his experience in the fields of journalism and publishing at the cusp of the so-called „information revolution“ apparently led him to learn many things sort of by osmosis.

    One of my favorite ideas of his is the way he talks about human-computer interaction. Initially, this was almost exclusively text-based. Then, he notes, with the advent of „graphical user interfaces“ (GUIs), computers became more and more instruments with which humans would point at stuff. He has presented this idea so often that I don’t even know which presentation I should refer to, link to or point at – which one I should index.

    In the early days of search, the book was ubiquitous. Indeed, several hundred years ago it almost seems as though each and every question could be answered with one single codex – and this codex was called the „Bible“ (which means, essentially, „the books“). We have come a long way, baby. Today, we might say that online, „the text box is king“ (Tom Paine, eat your heart out! 😉 ).

    Although computer manufacturers desperately try to limit the choices consumers have once they have acquired their machines with loads of pre-installed (and usually highly sponsored) software, it will not be very long before the typical consumer is confronted with a text box in order to interact with his or her mish-mash of hardware and software. Even without typing out any text whatsoever, whenever a human presses a button to take a picture or clicks on an icon to record an audio or video, the associated files are given a text-string filename by the gizmo machinery. All of the code running on each and every machine is written out in plain text somewhere. When computers write their own Bible, it is quite probable that they will start off with something like „In the beginning was the text, and it was human.“
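
    As a small illustration of that point, here is a sketch of the kind of text-string filename a camera-style gizmo might generate from nothing more than a button press – the IMG_YYYYMMDD_HHMMSS naming pattern is simply one common convention, assumed here for the sake of the example.

        # Illustrative sketch: a single button press still ends up as text,
        # e.g. a camera-style filename derived from the capture timestamp.
        # The IMG_YYYYMMDD_HHMMSS.jpg pattern is an assumed convention.
        from datetime import datetime

        def camera_filename(prefix="IMG", extension="jpg"):
            stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            return f"{prefix}_{stamp}.{extension}"

        print(camera_filename())  # e.g. IMG_20160531_174826.jpg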

    If humans ever asked an „artificially intelligent“ computer a question like „what is love?“ the computer would probably be very hard-pressed not to respond „a four-letter word“.

     
  • nmw 15:16:27 on 2016/03/04 Permalink
    Tags: algorithmic, content, WordPress, functional, intelligences, procedural, procedure, procedures, refer, reference, relate, technologies

    Limitations in the WordPress Notifications algorithm 

    Ted and Brandon’s most recent episode of the „Concerning AI“ podcast is a very rewarding listen… – mainly because of their thinking with respect to compassion towards (or against) algorithms.

    Having compassion towards or against an algorithm seems like a very strange concept, and I feel I very much agree with Ted and Brandon’s thinking during the episode, but I also want to use the suggestion as a „what if“ sort of springboard.

    Ted and Brandon provided several examples of algorithms (and/or tools). Perhaps the quintessential example is the hammer (for pounding nails). Another example they provided was the so-called „Google“ algorithm (presumably counting the links that point to any particular internet address, in order to „load the value“ of that address). Another algorithm they mentioned was the „AlphaGo“ Go-playing algorithm. One they didn’t mention was the Facebook Group algorithm, which they employ for the purpose of facilitating discussions related to the podcast. Another algorithm (or perhaps „procedural code“ might be a more appropriate term) they didn’t mention is the WordPress Notifications procedure (or function?) … which attempts to notify the management of a site running WordPress when content on the site is mentioned. I am not exactly sure how it works – but I think both sites might have to be running WordPress (or at least software that is compatible with the notification procedure / function) … thereby enabling one site to send the other site some message indicating that the latter site was referenced by the first site. In traditional publishing, such references were called „footnotes“, and there was indeed also a tool in the paper era that notified authors when something they wrote had been cited (these were referred to as „citation indexes“).
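
    For what it is worth, the cross-site message described here sounds like WordPress’s pingback mechanism, which travels over XML-RPC. A minimal sketch of sending one by hand might look as follows – the URLs are placeholders, and real sites may have pingbacks disabled.

        # Minimal sketch of sending a WordPress pingback by hand over XML-RPC.
        # WordPress exposes the "pingback.ping" method at /xmlrpc.php; the URLs
        # below are placeholders, not real addresses.
        import xmlrpc.client

        source = "https://my-blog.example/post-mentioning-episode-14/"
        target = "https://their-site.example/2016/02/episode-14/"

        server = xmlrpc.client.ServerProxy("https://their-site.example/xmlrpc.php")
        try:
            result = server.pingback.ping(source, target)
            print("Pingback accepted:", result)
        except xmlrpc.client.Fault as fault:
            print("Pingback rejected:", fault.faultString)

    Because the target of a pingback has to resolve to a specific, pingback-enabled post, a mention of a site in general does not produce any notification – which fits the limitation described below.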

    I am belaboring this one algorithm (or procedure or function or whatever sort of code it might be) primarily because I think it could be coded better. As far as I know, whenever I mention the site concerning.ai in general, the concerning.ai site is not notified. The only way the concerning.ai site can be notified by my mentioning it is if I mention a particular piece of content – for example: Episode Number 14. I think it would be nice if the site would be notified even if I only refer to the site in general.

    Ted and Brandon discuss that they don’t feel as if they can empathize with any of the algorithms they mention – but I feel they probably do. If they want to play Go, then they will probably be more likely to „hang out“ with a Go algorithm. If they want to meet people, they might be more likely to „hang out“ with a Facebook algorithm. If they want to watch YouTube videos, they might search for such information directly on YouTube, or perhaps they might utilize the Google search algorithm (in particular because Google and YouTube are apparently very closely related).

    I have a hunch that the best way to think about this is via the concept of relationships. When my aim is to pound nails, then I will probably develop a close relationship with a hammer. If my aim is to play Go, then I could develop a relationship with algorithms devoted to Go (perhaps alpha-go.com or maybe play-go.net etc.), or perhaps I could input strings into some other algorithm (e.g. Google, Facebook, Youtube, etc.) and use whatever output I get in order to reach my goal. This might also work for the goal „have a conversation“. Indeed: many written texts are in a way conversations, and we often develop relationships with codices that are no longer limited to the life spans of their authors, etc. I don’t even know who invented hammers. I mainly simply think of them as „hammer“.

    Please note that I have tried to make this post very brief. Lawrence Lessig has argued that the code in so-called “artificial languages” is like law. I could equally well argue that laws codified in so-called “natural language” are actually code. For more on this, please consider also reading “How to Constrain the Freedom to Choose the Best of all Possible Worlds During an Era of Uninterrupted Progress”.

     