Tagged: computation

  • nmw 15:27:59 on 2016/07/12 Permalink
    Tags: academia, academic, bandwagon, bandwagon effect, computation, compute, corrupt, corrupted, corruption, group think, groupthink, majority, populism, populist, reason, systemic, trusted, universities, valid, validity, vote, votes, voting

    The Spectre of Populism 

    There is a spectre haunting the Web: That spectre is populism.

    Let me backtrack a moment. This piece is part of an ongoing series of posts about „rational media“ – a concept that is still not completely hard and fast. I have a hunch that the notion of „trust“ is going to play a central role… and trust itself is also an extremely complex issue. In many developed societies, trust is at least in part based on socially sanctioned institutions (cf. e.g. „The Social Construction of Reality“) – for example: public education, institutions for higher education, academia, etc. Such institutions permeate all of society – be it a traffic sign at the side of a road, or a crucifix as a central focal element on the altar in a church, or even the shoes people buy and walk around with on a daily basis.

    The Web has significantly affected the role many such institutions play in our daily lives. For example: one single web site (i.e. the information resources available at a web location) may be more trusted today than an encyclopedia produced by thousands of writers ever was – whether centuries ago, decades ago, or even just a few years past.

    Similarly, another web site may very well be trusted by a majority of the population to answer any and all questions whatsoever – whether of encyclopedic nature or not. Perhaps such a web site might use algorithms – basically formulas – to arrive at a score for the „information value“ of a particular web page (the HTML encoded at one sub-location of a particular web site). A large part of this formula might involve a kind of „voting“ performed anonymously – each vote might be no more than a scratch mark presumed to indicate a sign of approval (an „approval rating“) given from disparate, unknown sources. Perhaps a company might develop more advanced methods in order to help gauge whether the vote is reliable or whether it is suspect (for example: one such method is commonly referred to as a „nofollow tag“ – a marker indicating that the vote should not be trusted).
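    A minimal sketch of the kind of scoring mechanism described above. All names here are my own invention, and the scheme is deliberately naive; it is an illustration of the idea, not any real search engine's algorithm:

    ```python
    # Hypothetical page score: tally anonymous approval votes, but
    # discount any vote carrying a "nofollow" marker (i.e. a vote
    # the voter themselves has flagged as not to be trusted).

    def page_score(votes):
        """votes: list of dicts like {"approve": True, "nofollow": False}."""
        score = 0
        for vote in votes:
            if vote.get("nofollow"):
                continue  # marker says: do not count this vote
            if vote.get("approve"):
                score += 1
        return score

    votes = [
        {"approve": True, "nofollow": False},
        {"approve": True, "nofollow": True},   # discounted
        {"approve": False, "nofollow": False},
    ]
    print(page_score(votes))  # 1
    ```

    Note that nothing in this tally examines the page itself; the score is purely a function of who voted and how, which is exactly the populist character discussed below.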

    What many such algorithms have in common is that on a very basic level, they usually rely quite heavily on some sort of voting mechanism. This means they are fundamentally oriented towards populism – the most popular opinion is usually viewed as the most valid point of view. This approach is very much at odds with logic, the scientific method and other methods that have traditionally (for several centuries, at least) been used in academic institutions and similar „research“ settings. At their core, such populist algorithms are not „computational“ – since they rely not on any kind of technological solution to questions, but rather scan and tally up the views of a large number of human (and/or perhaps robotic) „users“. While such populist approaches are heralded as technologically advanced, they are actually – on a fundamental level – very simplistic. While I might employ such methods to decide which color of sugar-coated chocolate to eat, I doubt very much that I, personally, would rely on such methods to make more important – for example: „medical“ – decisions (such as whether or not to undergo surgery). I, personally, would not rely on such populist methods much more than I would rely on chance. As an example of the kind of errors that might arise from employing such populist methods, consider the rather simple and straightforward case that some of the people voting could in fact be color-blind.
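    To make the color-blind example concrete, here is a toy simulation (entirely my own construction, with made-up parameters): some fraction of the voters cannot distinguish the colors and so vote at random, and as that fraction grows, the „popular verdict“ degrades toward a coin flip.

    ```python
    # Toy model of a populist vote on a color question. Voters who are
    # color-blind guess at random; everyone else reports the true color.
    import random

    def majority_answer(n_voters, colorblind_rate, true_color="red", seed=0):
        rng = random.Random(seed)  # fixed seed for reproducibility
        tally = {"red": 0, "green": 0}
        for _ in range(n_voters):
            if rng.random() < colorblind_rate:
                tally[rng.choice(["red", "green"])] += 1  # random guess
            else:
                tally[true_color] += 1  # voter sees the true color
        return max(tally, key=tally.get)

    print(majority_answer(1001, 0.0))  # red: every voter sees the true color
    # With 95% of voters color-blind, the tally is mostly noise and the
    # informed minority may or may not prevail:
    print(majority_answer(1001, 0.95))
    ```

    The point of the sketch is that the vote's reliability depends entirely on the competence of the voters, which the tallying mechanism itself cannot see.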

    Yet that is just the beginning. Many more problems lurk under the surface, beyond the grasp of merely superficial thinkers. Take, for example, the so-called „bandwagon effect“ – namely, that many people are prone to fall into a „follow the leader“ kind of „groupthink“. It is quite plausible that such bandwagon effects influence not only people’s answers, but also the kinds of questions they feel comfortable asking (see also my previous post). On a more advanced level, complex systems may also be influenced by the elements they comprise. For example: while citation indexes were originally designed on the assumption that citation data ought to be reliable, over the years it was demonstrated that citations are in fact very prone to a wide variety of corruption, and that citation analysis is not at all a reliable method. Whatever reliability citation data may once have had, citation fraud eventually corrupted the system.

     
  • nmw 16:17:23 on 2016/02/20 Permalink
    Tags: computation

    In Our Brains… 

    In our brains, almost everything is connected to the world outside of our brains. Thinking about artificial intelligence (AI), my friends Ted and Brandon are asking for help (@http://concerning.ai). In my humble opinion: If you want to „get somewhere“ then you need to think „outside of the box“.

    What I’m writing here has mainly to do with things Brandon and Ted talk about in episode 10. Also, in episodes 11 and 12, Brandon and Ted talk with Evan Prodromou, a „practitioner“ in the field. Evan raises (at least) two fascinating points: 1. procedural code and 2. training sets. Below, I will also talk about these two issues.

    When I said above that there is a need to „think outside of the box“, I was alluding to much larger systems than what is usually considered (note that Evan, Ted and Brandon also touched on a notion of „open systems“). For example: language. So-called „natural language“ is extremely complex. To present just a glimpse of the enormous complexity of natural language, consider the „threshold anecdote“ Ted shared at the beginning of episode 11. A threshold is both a very concrete thing and also an abstract concept. When people use the term „threshold“, other people can only understand the meaning of the term by at the same time also considering the context in which the term is being used. This is for all practical purposes an intractable problem for any computational device which might be constructed by humans sometime in the coming century. Language itself does not exist in one person or one book; it is something which is distributed among a large number of people belonging to the same linguistic community. The data is qualitative rather than quantitative. Only the most fantastically optimistic researchers would ever venture to try to „solve“ language computationally – and I myself was also once one such researcher. I doubt humans will ever be able to build such a machine… not only due to the vast resources it might require, but also because the nature of (human) natural language is orthogonal to the approach of „being solvable“ via procedural code.

    Another anecdote I have often used to draw attention to how ridiculous the aim to „solve language“ seems is Kurzweil’s emphasis on pattern recognition. Patterns can only be recognized if they have been previously defined. Keeping with another example from episode 11, it would require humans to walk from tree to tree and say „this is an ash tree“ and „that is not an ash tree“ over and over until the computational device were able to recognize some kind of pattern. However, the pattern recognized might be something like „any tree located at a listing of locations where ash trees grow“. Indeed: The hope that increasing computational resources might make pattern recognition easier underscores the notion that such „brute force“ procedures might be applied. Yet the machine would nonetheless not actually understand the term „ash tree“. A computer can recognize what an ash tree is IFF (if and only if) a human first defines the term. If a human must first define the term, then there is in fact no „artificial intelligence“ happening at all.
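    The ash-tree argument can be made concrete with a deliberately bare-bones sketch (my own construction; the location-based „feature“ is exactly the degenerate pattern described above). The „trained“ model is nothing but a lookup over places humans have labeled, which is the point: no understanding of „ash tree“ ever enters the machine.

    ```python
    # A "pattern recognizer" trained purely on human-supplied labels.
    # The learned "pattern" is just membership in the labeled set.

    def train(labeled_examples):
        """labeled_examples: list of (location, is_ash) pairs provided by humans."""
        return {loc for loc, is_ash in labeled_examples if is_ash}

    def predict(model, location):
        # "Recognition" here is nothing more than recalling a human label.
        return location in model

    model = train([("park", True), ("riverbank", True), ("desert", False)])
    print(predict(model, "park"))    # True
    print(predict(model, "meadow"))  # False: never labeled, so invisible
    ```

    Real machine-learning models generalize beyond a literal lookup table, but they still cannot recognize a category no human has first defined and labeled for them, which is the argument being made here.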

    I have a hunch that human intelligence has evolved according to entirely different laws – „laws of nature“ rather than „laws of computer science“ (and/or „mathematical logic“). Part of my thinking here is quite similar to what Tim Ferriss has referred to as „not-to-do lists“ (see „The 9 Habits to Stop Now“). Similarly, it is well-known that Socrates referred to „divine signs“ which prevented him from taking one or another course of action. You might also consider (from the field of psychology) Kurt Lewin’s „Field Theory“ (in particular the „Force Field Analysis“ of positive / negative forces) in this context, and/or (from the field of economics) the „random walk“ hypothesis. The basic idea is as follows: our brains have evolved with a view towards being able to manage (or „deal with“) situations we have never experienced before. Hence „training sets“ are out of the question. We are required to make at best „educated“ guesses about what we should do at any moment. Language is a tool-set which has symbiotically evolved in our environment (much like the air we breathe is also conducive to our own survival). Moreover: both we and our language (as also other aspects of our environment) continue to evolve. Taken to the ultimate extreme, this means that the coexistence of all things evolving in concert shapes the intelligence of each and every sub-system within the universe. To put it rather plainly: the evolution of birds and bees enables us to refer to them as birds and bees; the formation of rocks and stars enables us to refer to them as rocks and stars; and so on.

    In case you find all of this theory somewhat too theoretical, please feel free to check out one of my recently launched projects – in particular the „How to Fail“ page over at bestopopular.com (which also utilizes the „negative thinking“ approach described above).

     
  • feedwordpress 21:21:25 on 2013/10/24 Permalink
    Tags: computation, convergence, divergence, recreation, recreational

    The End of the Media Convergence Hoax 

    For several years already we have been hearing a lot of hot air along the lines of “all media are the same bits and pieces of information”. This utter nonsense is going to come to a stop sooner or later — and not only do I wish it would be sooner, I also see signs that it will be soon.

    The primary sign I see is the divergence of data. Whereas machine-readable formats are becoming increasingly semantically codified, non-machine-readable formats are becoming increasingly raw. For example: when content on Facebook is “liked”, that is but one bit of information. Likewise, online friendship is nothing more than a simple switch that gets turned on or off. On the other hand, image files, video files, audio files — all not “machine readable” (meaning there is no code a machine could use to “understand” the meaning or significance of the individual bits — as there might be for “like” or “friendship”) — are incessantly being upgraded to ever higher definition — meaning more bits per unit.

    This divergence is becoming increasingly obvious wherever there are limits to the volume of data that can be transported. A 4K screen will require such large amounts of data that transporting such large files over the Internet might become prohibitively expensive, especially in bandwidth-constrained environments.
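    Some back-of-the-envelope arithmetic (my own illustrative numbers) shows the scale of the divergence. Real video streams are heavily compressed, so treat the raw figure as an upper bound:

    ```python
    # Raw, uncompressed 4K UHD video versus a single-bit "like".

    width, height = 3840, 2160   # 4K UHD pixels
    bytes_per_pixel = 3          # 24-bit color
    fps = 60                     # frames per second

    bytes_per_second = width * height * bytes_per_pixel * fps
    print(bytes_per_second)      # 1492992000, i.e. roughly 1.5 GB per second
    print(bytes_per_second * 8)  # bits per second, versus 1 bit per "like"
    ```

    Even after aggressive compression, a second of 4K video remains many orders of magnitude larger than the codified signals (likes, friendships) the preceding paragraph describes.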

    While at the turn of the millennium there was much talk about all bits being the same, it is becoming more and more apparent that whereas codified information belongs on a computer, audio and video files belong in a different setting — the “lean back” media center in the living room, not the “lean forward” business machine in the den.

    On one of my browser apps, I already have images turned off, because I am mostly interested in machine-readable text. I expect at some point the illustrated magazine will experience a comeback… perhaps not quite in paper, but in returning to the living room setting, and away from the environment of the more computational den.

     