ČRNOBELO | BLACK & WHITE

Črnobelo je instalacija o skrajni obliki diferenciacije - polarizacije (dualizacije). Ker se vmes formira neprehodna ovira, temu rečemo konfrontacija. Beseda že nosi v sebi kal bojne linije - fronte.

Hkrati je to tudi razmislek o avtonomiji (objektov/subjektov) - torej tudi o "življenju". Vendar ne o življenju stroja, ampak metaforičnega objekta (le pogojno tudi subjekta).

Osnovna predpostavka pri izdelavi "avtonomnih" objektov je, da poiščemo najnižjo točko fascinacije - torej skoraj ničesar ne prikrijemo, a skušamo ravno še ugledati "čarobnost" nedoumljenega gibanja.

Med razmislekom o potrebnih (a ne zadostnih) virih za avtonomijo se vsiljuje pojem "algoritem" kot mehanizem prikrivanja, potujitve, translacije (smisla gibanja; tudi spremembe pomena - da, celo diskurzivnega polja). Matematično povedano gre za prenosno funkcijo (bolj ali manj kompleksno), ki sicer enostavno gibanje spremeni v "nerazumljivo". Stopenj nerazumljivosti je seveda mnogo, zanimajo nas tiste najbolj enostavne, ki še poskrbijo za to, da je rezultat pretvorbe "magičen".

Preprost primer: rokohitrec s premetavanjem dveh žogic ne počne nič posebnega, fascinacija in magija se začneta pri treh žogicah - opazovalcu se razumevanje premeščanja žogic iz roke v roko izmuzne. Torej: nerazumevanje = fascinacija & magija ...

Ampak za začetek ...

Iz Wikipedije.

V matematiki in računalništvu je algoritem (beseda je izpeljanka iz imena perzijskega matematika Al-Khwarizmija) končen nabor natančno definiranih navodil za izvedbo neke naloge, ki se glede na začetno stanje konča v ustreznem prepoznavnem končnem stanju (v nasprotju s hevristiko). Koncept algoritma je pogosto ponazorjen s primerom recepta, čeprav so mnogi algoritmi veliko bolj zapleteni; algoritmi imajo pogosto korake, ki se ponavljajo (iterirajo) ali zahtevajo odločitve (kot je logika ali primerjava), dokler naloga ni dokončana.

Med različnimi paradigmami algoritmov izberemo tole:

Verjetnostna in hevristična paradigma. Algoritmi, ki pripadajo temu razredu, bolj ohlapno ustrezajo definiciji algoritma.

Probabilistični algoritmi so tisti, ki nekatere odločitve sprejemajo naključno (ali psevdonaključno); za nekatere težave je dejansko mogoče dokazati, da morajo najhitrejše rešitve vključevati nekaj naključnosti.

Genetski algoritmi poskušajo najti rešitve za težave s posnemanjem bioloških evolucijskih procesov, s ciklom naključnih mutacij, ki dajejo zaporedne generacije 'rešitev'. Tako posnemajo razmnoževanje in »preživetje najmočnejših«. V genetskem programiranju je ta pristop razširjen na algoritme, tako da algoritem sam obravnava kot "rešitev" problema.

Obstajajo tudi hevristični algoritmi, katerih splošni namen ni iskanje optimalne rešitve, ampak približne rešitve, kadar čas ali sredstva za iskanje popolne rešitve niso praktični. Primer tega bi bilo lokalno iskanje, tabu iskanje ali algoritmi simuliranega žarjenja, razred hevrističnih verjetnostnih algoritmov, ki rešitev problema spreminjajo za naključno količino. Ime 'simulirano žarjenje' namiguje na metalurški izraz, ki pomeni segrevanje in ohlajanje kovine, da se doseže stanje brez napak. Namen naključne variance je najti rešitve, ki so blizu globalno optimalnim, ne pa le lokalno optimalnim; ideja je, da se bo naključni element zmanjševal, ko se algoritem ustali pri rešitvi.

Drug način za razvrščanje algoritmov je po izvedbi. Rekurzivni algoritem je tisti, ki večkrat kliče (se sklicuje na) samega sebe, dokler ni izpolnjen določen pogoj; ta metoda je običajna za funkcionalno programiranje. O algoritmih se običajno razpravlja ob predpostavki, da računalniki izvajajo eno navodilo algoritma naenkrat. Takšni računalniki se včasih imenujejo serijski računalniki. Algoritem, zasnovan za takšno okolje, se imenuje serijski algoritem, v nasprotju z vzporednimi algoritmi, ki izkoriščajo računalniške arhitekture, kjer lahko več procesorjev hkrati dela na istem problemu. V to kategorijo bi verjetno spadali tudi različni hevristični algoritmi, saj njihovo ime (npr. genetski algoritem) opisuje njihovo izvedbo.

Podvrsta vzporednih algoritmov, porazdeljeni algoritmi, so algoritmi, zasnovani za delo v računalništvu v gruči in porazdeljenih računalniških okoljih, kjer je treba obravnavati dodatne pomisleke glede "klasičnih" vzporednih algoritmov.

Za našo uporabo se bomo osredotočili na hevristične/vzporedne algoritme in se čim bolj držali stran od računalnikov ... Osredotočili se bomo na elektromehanske avtomate - tako imenovano mehatroniko - ali celo na čisto mehanske sisteme. Iskali bomo med roboti BEAM. Pionir na tem področju je bil William Grey Walter.

Iz Wikipedije...

Najbolj znano delo Greya Walterja je bila izdelava nekaterih prvih elektronskih avtonomnih robotov. Želel je dokazati, da lahko bogate povezave med majhnim številom možganskih celic povzročijo zelo zapleteno vedenje - v bistvu je skrivnost delovanja možganov v tem, kako so povezani. Njegova prva robota, imenovana Elmer in Elsie, sta bila izdelana med letoma 1948 in 1949 in sta bila zaradi svoje oblike in počasnega gibanja pogosto opisana kot želvi - in ker sta nas 'učila' o skrivnostih organizacije in življenja.

Robotika BEAM (akronim za biologijo, elektroniko, estetiko in mehaniko) je slog robotike, ki namesto mikroprocesorja uporablja preprosta analogna vezja. Večina robotov BEAM ima v primerjavi s tradicionalnimi mobilnimi roboti nenavadno preprosto zasnovo; prilagodljivost namena zamenjajo za robustnost delovanja.

Za razliko od mnogih drugih vrst robotov, ki jih krmilijo mikrokrmilniki, so roboti BEAM zgrajeni na načelu uporabe več preprostih vedenj, povezanih neposredno s senzorskimi sistemi z malo obdelave signala. Ta oblikovalska filozofija tesno odmeva v klasični knjigi "Vozila: Eksperimenti v sintetični psihologiji", ki skozi vrsto miselnih eksperimentov raziskuje razvoj kompleksnega vedenja robotov prek preprostih zaviralnih/vzbujevalnih povezav senzorjev z aktuatorji.

Black and White is an installation about the most extreme form of differentiation - polarization (dualization). Because an impassable barrier forms in between, we call this confrontation. The word itself already carries the seed of a battle line - the front.

At the same time it is a reflection on the autonomy (of objects/subjects) - and thus also on "life" itself. Not the life of a machine, however, but of a metaphorical object (only conditionally also a subject).

The basic premise in building "autonomous" objects is to find the lowest point of fascination: we conceal almost nothing, yet can still just barely glimpse the "magic" of the unfathomed movement.

During the reflection on the necessary (but not sufficient) resources for autonomy, the concept of the "algorithm" imposes itself as a mechanism of concealment, estrangement, translation (of the sense of the movement; also of a change of meaning - yes, even of the discursive field). Mathematically speaking, it is a transfer function (more or less complex) that turns an otherwise simple movement into something "incomprehensible". There are, of course, many degrees of incomprehensibility; we are interested in the simplest ones, those that still ensure that the result of the conversion is "magical".
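
To make the idea of such a transfer function tangible, here is a minimal sketch in Python (an illustration of ours, not part of the installation; the motion signal and the particular "concealing" function are assumptions chosen for the example): a plainly readable oscillation is fed through a simple composition of incommensurate oscillations, and the output, while fully determined, becomes hard to read.

import math

def simple_motion(t):
    # Plainly readable input motion: a slow sine oscillation.
    return math.sin(0.5 * t)

def transfer(x, t):
    # A deliberately simple "concealing" transfer function: the input is
    # folded through two incommensurate oscillations, so the output is
    # still fully determined but no longer easy to read.
    return math.sin(3.1 * x + 0.7 * t) * math.cos(1.3 * t)

for step in range(10):
    t = step * 0.5
    print(f"t={t:4.1f}  in={simple_motion(t):+.3f}  out={transfer(simple_motion(t), t):+.3f}")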

A simple example: a juggler tossing two balls does nothing special; the fascination and magic begin with three balls - the observer's grasp of how the balls move from hand to hand slips away. So: incomprehension = fascination & magic ...

But to begin with...

From Wikipedia, the free encyclopedia.

In mathematics and computer science an algorithm (the word is derived from the name of the Persian mathematician Al-Khwarizmi), is a finite set of well-defined instructions for accomplishing some task which, given an initial state, will terminate in a corresponding recognizable end-state (contrast with heuristic). The concept of an algorithm is often illustrated by the example of a recipe, although many algorithms are much more complex; algorithms often have steps that repeat (iterate) or require decisions (such as logic or comparison) until the task is completed.
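
The textbook illustration of such a finite, well-defined procedure is Euclid's algorithm for the greatest common divisor; the short Python sketch below is our own aside, not part of the quoted text, and shows the repeated step and the comparison that decides when the task is done.

def gcd(a: int, b: int) -> int:
    # Euclid's algorithm: a finite set of well-defined instructions that
    # iterates a step and makes a comparison until it terminates in a
    # recognizable end-state.
    while b != 0:          # decision: are we finished?
        a, b = b, a % b    # repeated step: replace the pair
    return a

print(gcd(252, 105))  # -> 21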

Among different paradigms of algorithms we choose this one:

The probabilistic and heuristic paradigm. Algorithms belonging to this class fit the definition of an algorithm more loosely.

Probabilistic algorithms are those that make some choices randomly (or pseudo-randomly); for some problems, it can in fact be proved that the fastest solutions must involve some randomness.
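
As a small illustration of our own (not from the quoted text): a Monte Carlo estimate of pi in Python, where every individual choice is random yet the aggregate answer is predictable.

import random

def estimate_pi(samples: int = 100_000) -> float:
    # Probabilistic algorithm: each sample point is chosen randomly,
    # but the overall estimate converges toward pi.
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi())  # roughly 3.14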

Genetic algorithms attempt to find solutions to problems by mimicking biological evolutionary processes, with a cycle of random mutations yielding successive generations of 'solutions'. Thus, they emulate reproduction and "survival of the fittest". In genetic programming, this approach is extended to algorithms, by regarding the algorithm itself as a 'solution' to a problem.
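
For illustration, a deliberately small genetic-algorithm sketch in Python (our own toy, with an arbitrary target of an all-ones bit string standing in for the 'problem'): random mutation, selection and crossover yield successive generations of 'solutions'.

import random

GENOME_LEN = 20

def fitness(genome):
    # "Survival of the fittest": more 1-bits means a fitter individual.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Random mutation: occasionally flip a bit.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=30, generations=60):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives, the rest is discarded.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Reproduction: cross two random survivors, then mutate the child.
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randint(1, GENOME_LEN - 1)
            children.append(mutate(a[:cut] + b[cut:]))
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))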

Also there are heuristic algorithms, whose general purpose is not to find an optimal solution, but an approximate solution where the time or resources to find a perfect solution are not practical. An example of this would be local search, tabu search, or simulated annealing algorithms, a class of heuristic probabilistic algorithms that vary the solution of a problem by a random amount. The name 'simulated annealing' alludes to the metallurgic term meaning the heating and cooling of metal to achieve freedom from defects. The purpose of the random variance is to find close to globally optimal solutions rather than simply locally optimal ones, the idea being that the random element will be decreased as the algorithm settles down to a solution.
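
A minimal simulated-annealing sketch (again a Python toy of ours, with an arbitrary 'bumpy' energy landscape standing in for the problem): the solution is varied by a random amount, worse moves are sometimes accepted, and that randomness shrinks as the temperature cools.

import math
import random

def energy(x):
    # A bumpy landscape with many local minima; the global minimum is near x = 0.
    return x * x + 3.0 * math.sin(5.0 * x) ** 2

def simulated_annealing(steps=5000, temp=2.0, cooling=0.999):
    x = random.uniform(-10, 10)
    for _ in range(steps):
        candidate = x + random.uniform(-1, 1)      # vary the solution randomly
        delta = energy(candidate) - energy(x)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature cools ("the metal settles").
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        temp *= cooling
    return x

print(simulated_annealing())  # typically ends near the global minimum at 0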

Another way to classify algorithms is by implementation. A recursive algorithm is one that invokes (makes reference to) itself repeatedly until a certain condition is met, which is a method common to functional programming. Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time. Those computers are sometimes called serial computers. An algorithm designed for such an environment is called a serial algorithm, as opposed to parallel algorithms, which take advantage of computer architectures where several processors can work on a problem at the same time. The various heuristic algorithms would probably also fall into this category, as their name (e.g. a genetic algorithm) describes their implementation.
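
To illustrate the recursive case with the same Euclidean procedure used above (our own Python fragment, not part of the quoted text): the function keeps invoking itself until the stop condition is met.

def gcd_recursive(a: int, b: int) -> int:
    # Recursive algorithm: the function refers to itself repeatedly
    # until the terminating condition holds.
    if b == 0:
        return a
    return gcd_recursive(b, a % b)

print(gcd_recursive(252, 105))  # -> 21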

A subtype of parallel algorithms, distributed algorithms are algorithms designed to work in cluster computing and distributed computing environments where additional concerns over "classical" parallel algorithms need to be addressed.
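
A rough Python illustration of the parallel idea (a sketch only, using the standard multiprocessing module; the prime-counting task is an arbitrary stand-in): one problem is cut into chunks that several processes work on at the same time.

from multiprocessing import Pool

def count_primes(bounds):
    # Worker: count primes in the half-open range [lo, hi).
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Split the problem into chunks handled by several processes at once.
    chunks = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
    with Pool(processes=4) as pool:
        partial = pool.map(count_primes, chunks)
    print(sum(partial))  # 9592 primes below 100,000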

For our use we will focus on heuristic/parallel algorithms and stay away from computers as much as possible... We will focus on electro-mechanical automata - so-called mechatronics - or even on purely mechanical systems. We will search among BEAM robots. The pioneer in this field was William Grey Walter.

From Wikipedia, the free encyclopedia...

Grey Walter's most famous work was his construction of some of the first electronic autonomous robots. He wanted to prove that rich connections between a small number of brain cells could give rise to very complex behaviors - essentially that the secret of how the brain worked lay in how it was wired up. His first robots, named Elmer and Elsie, were constructed between 1948 and 1949 and were often described as tortoises due to their shape and slow rate of movement - and because they 'taught us' about the secrets of organisation and life.

BEAM robotics (acronym for Biology, Electronics, Aesthetics, and Mechanics) is a style of robotics that uses simple analog circuits instead of a microprocessor. Most BEAM robots are unusually simple in design compared to traditional mobile robots, and trade off flexibility in purpose for robustness of performance.

Unlike many other types of robots controlled by microcontrollers, BEAM robots are built on the principle of using multiple simple behaviors linked directly to sensor systems with little signal conditioning. This design philosophy is closely echoed in the classic book "Vehicles: Experiments in Synthetic Psychology", which through a series of thought experiments explores the development of complex robot behaviors through simple inhibitory/excitatory sensor links to the actuators.
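
Although BEAM robots are analog and contain no code at all, the wiring principle can be imitated in a few lines. The Python sketch below is our own illustration, with an assumed light position and sensor model: in the spirit of Braitenberg's vehicle 2b, two light sensors drive the opposite wheels, and this crossed excitatory wiring alone steers the robot toward the light.

import math

LIGHT = (5.0, 5.0)  # assumed position of the light source

def sensor(px, py):
    # Analog light sensor: intensity falls off with squared distance.
    d2 = (px - LIGHT[0]) ** 2 + (py - LIGHT[1]) ** 2
    return 1.0 / (1.0 + d2)

def step(x, y, heading, dt=0.1):
    # Two sensors mounted left and right of the direction of travel.
    left = sensor(x + math.cos(heading + 0.5), y + math.sin(heading + 0.5))
    right = sensor(x + math.cos(heading - 0.5), y + math.sin(heading - 0.5))
    # Crossed excitatory wiring: each sensor drives the opposite wheel,
    # so the robot turns toward the stronger stimulus.
    left_motor, right_motor = right, left
    speed = 0.5 * (left_motor + right_motor)
    heading += (right_motor - left_motor) * dt   # differential steering
    return (x + speed * math.cos(heading) * dt,
            y + speed * math.sin(heading) * dt,
            heading)

x, y, heading = 0.0, 0.0, 0.0
for i in range(1, 5001):
    x, y, heading = step(x, y, heading)
    if i % 1000 == 0:
        print(f"step {i}: ({x:5.2f}, {y:5.2f})")  # drifts toward the light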