Wednesday, November 25, 2015
Foundations of Industrial Ideologies: Maximization
For the previous post, see the page Foundations of Industrial Ideologies: Synchronization.
The Third Wave, Alvin Toffler, pp. 68-70
The split between production and consumption also gave rise, in second-wave societies, to what might be called "macro-mania": a Texas-style addiction to bigness and growth, a kind of disease. The reasoning went like this: since longer production runs in the factory meant lower unit costs, scaling up should yield the same kind of gains in other activities too. This logic would come to treat the word "big" as synonymous with "efficient." In the end, maximization became the fifth main principle of the second wave.
Cities and countries boasted of having the tallest skyscraper, the largest dam, or the world's biggest miniature golf course. And since bigness is the result of growth, most industrial governments, corporations, and other organizations became fanatical pursuers of the ideal of growth.
Japanese workers and managers at the Matsushita Electric Company gather together every day and sing this song in chorus:
.. To increase production
we are doing the best we can,
Sending our goods to the whole world,
Endlessly and continuously,
Like water gushing from a fountain,
Grow, industry, grow, grow, grow!
Harmony and sincerity!
Matsushita Electric!
In 1960, when America had completed its classical industrialization phase and was beginning to feel the effects of the third wave, each of its 50 largest corporations employed an average of 80,000 workers. General Motors alone had 595,000 employees, and Vail's AT&T had 736,000. Given that the average household at the time contained 3.3 people, this meant that more than 2,000,000 people depended on this single company for their livelihood. To put that figure in perspective: it equals nearly half the population of the United States when Washington and Hamilton were founding the country [..]
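The two-million figure is simple arithmetic on the numbers above:
\[ 736{,}000 \;\text{employees} \times 3.3 \;\text{people per household} \approx 2{,}430{,}000 \;\text{people}. \]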
Tuesday, November 17, 2015
Foundations of Industrial Ideologies: Synchronization
For the previous post, see the page Foundations of Industrial Ideologies: Standardization.
The Third Wave, Alvin Toffler, pp. 64-66
The ever-widening divide between production and consumption brought about great changes in how people viewed time. In every society where the market ruled, whether free or controlled, the maxim that time equals money took hold, because in industrial economies the expensive machines could not be allowed to stand idle. These machines, by their very nature, moved to rhythms of their own; and this factor gave rise to yet another important principle of the second wave: synchronization.
In fact, even the earliest human communities were no strangers to organizing work along a shared timeline. Hunters had to work in temporal harmony with one another to catch their prey, as did fishermen hauling in nets and pulling oars [..] Yet, at least until the machines of the second wave arrived, most synchronization in society emerged organically and naturally, following the rhythm of the changing seasons, the rotation of the earth, and biological processes such as the heartbeat.
Second-wave societies, by contrast, began to move to the rhythm of machines.
Wednesday, November 4, 2015
Foundations of Industrial Ideologies: Standardization
For the previous post, see the page Foundations of Industrial Ideologies: Specialization.
The Third Wave, Alvin Toffler, pp. 60-62
Standardization is the second-wave concept most people know best. Everyone knows that industrial societies produce millions of near-identical goods. But few people realize that, once the market began to gain importance, we standardized much more than Coca-Cola bottles, light bulbs, and car parts. We applied the principle to many other areas of life as well; and one of the pioneers of this movement was none other than Theodore Vail, who turned the AT&T company into a giant.
In 1860, Vail, whose daily job was delivering mail along the railway lines, noticed that letters never traveled to their final address by the same routes. Sacks of mail moved back and forth, sometimes taking months to reach their destination. Vail hit upon the concept of the standardized delivery route, so that every letter going to the same address would always follow the same path. In this way Vail brought a revolutionary change to the post office. After founding AT&T, his goal would be to put the same type of telephone into every home in America [..]
Vail is one of the famous figures in the category of industrial society's "Great Standardizers." Frederick Taylor, a former machinist turned ideologue and consultant, was another. Taylor believed that work could be made scientific by standardizing every step of the labor performed. In the first decades of the 20th century, he argued that there should be one best (standard) way to do every job, one best (standard) tool to do it with, and an agreed (standard) amount of time for its completion.
Friday, October 30, 2015
Foundations of Industrial Ideologies: Specialization
For the previous post, see the page Foundations of Industrial Ideologies: Concentration.
Alvin Toffler, The Third Wave, pp. 62-64
Another great principle that took root in all second-wave societies is that of working in a single field (specialization). While these societies destroyed diversity in speech, in lifestyles, in rest and entertainment, they simultaneously felt the need for a [compartmentalized] diversity in the world of work. The second wave reinforced the division of labor, pushing aside the jack-of-all-trades peasant in favor of the tight-lipped clerk or laborer focused on a single task. This person would be obliged to do that one thing again and again, over and over, exactly as Taylor recommended [..]
Henry Ford was following exactly this logic when he calculated that producing his famous Model T required 7,882 different operations in his factory. By Ford's count, only 949 of these operations required strong men and 3,448 required men with all their limbs intact; and he continued, quite coolly: "of the remaining jobs, 670 could be done by legless men, 2,637 by one-legged men, 2 by armless men, 715 by one-armed men, and 10 by blind men." In short, specialization no longer needed even a whole human being. "Partial people" were enough to keep it running. A more brutal example of extreme specialization could hardly be found anywhere.
Although this condition is usually associated with capitalism, it is also one of the most fundamental features of socialism, because extreme specialization arises automatically in every society where production is divorced from consumption. The factories of the USSR, Hungary, Poland, and East Germany needed elaborate "specialist" techniques just as much as the factories of the United States, whose Department of Labor had catalogued fully 20,000 different occupations, and of Japan.
Monday, October 19, 2015
Foundations of Industrial Ideologies: Concentration
For the previous post, see the page The Third Wave and the Information Society.
The Third Wave, Alvin Toffler, pp. 67-68
The rise of [industrial] markets brought forth yet another second-wave rule: the principle of concentration.
First-wave societies drew mostly on dispersed energy sources. Second-wave societies became almost entirely dependent on highly concentrated deposits of fossil fuels.
But the second wave concentrated more than energy. It also concentrated the population, uprooting people from rural areas and moving them into highly concentrated urban centers.
It even concentrated work. While work in first-wave societies could be done in many different places (at home, in the village, on the farm), in second-wave industrial societies work came to be done almost entirely in factories, where thousands of people were brought together to do their jobs.
There is more: things other than work and energy were concentrated as well. Writing in the social science magazine New Society, Stan Cohen observed: "Before industrialization, the poor stayed with someone in the neighborhood or with their relatives, while criminals were punished, either severely or by being cast out of the settlements. The mentally handicapped stayed with their own families and, if poor, were looked after by the neighborhood. All these groups were thus dispersed throughout the community."
Saturday, October 3, 2015
Image Segmentation - Region Growing Algorithm
1 - Introduction and problem definition
1.1 - Introduction
Image segmentation is an important process in Computer Vision, used for several operations such as edge detection, classification, and 3D reconstruction. The main goal of image segmentation is to cluster pixels into regions. Clustering image pixels into regions in turn converts the image into a representation that is more meaningful and easier to analyse. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics. Image segmentation has a broad range of applicability in different fields of science and engineering.
In this lab work, we implement the region growing algorithm, one of the basic methods for partitioning a digital image, and then analyse its design and implementation. Finally, we compare the region growing algorithm with other image segmentation algorithms. We also describe the organization and development phases of the lab work.
1.2 - Problem definition
Our lab work problem asks us to perform image segmentation over different image representations and examine the results. We implement our image segmentation algorithm over gray-level images and RGB color space images to cluster them into different image regions, and then we compare our clustering results with the Fuzzy C-Means (FCM) clustering algorithm.
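To make the technique concrete, here is a minimal 4-connected region-growing sketch in Matlab. It is an illustration rather than the lab's exact code: the single seed pixel and the similarity threshold tol are assumed parameters.

function labels = region_grow(img, seedR, seedC, tol)
% Minimal 4-connected region growing from one seed pixel.
% img: gray-level image as double; tol: max allowed deviation from the seed value.
[rows, cols] = size(img);
labels = false(rows, cols);            % pixels accepted into the region
stack  = [seedR, seedC];               % pixels waiting to be examined
seedVal = img(seedR, seedC);
while ~isempty(stack)
    r = stack(end,1); c = stack(end,2); stack(end,:) = [];
    if r < 1 || r > rows || c < 1 || c > cols, continue; end
    if labels(r,c) || abs(img(r,c) - seedVal) > tol, continue; end
    labels(r,c) = true;                % accept the pixel ...
    stack = [stack; r-1 c; r+1 c; r c-1; r c+1];  % ... and queue its 4 neighbours
end
end

The same loop extends to RGB images by replacing the scalar intensity test with a distance in color space.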
Friday, July 24, 2015
What is Machine Learning? - Part II
Linear Regression with Multiple Variables
Multiple Features
Linear regression with multiple variables is also known as "multivariable linear regression." We now introduce notation for equations where we can have any number of input variables.
$$ \begin{align*} x_j^{(i)} &= \text{value of feature } j \text{ in the }i^{th}\text{ training example} \newline x^{(i)}& = \text{the column vector of all the feature inputs of the }i^{th}\text{ training example} \newline m &= \text{the number of training examples} \newline n &= \left| x^{(i)} \right| \; \text{(the number of features)} \end{align*} $$
Now define the multivariable form of the hypothesis function as follows, accommodating these multiple features:
$$ h_\theta (x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_3 + \cdots + \theta_n x_n $$
Using the definition of matrix multiplication, our multivariable hypothesis function can be concisely represented as:
$$ \begin{align*} h_\theta(x) = \begin{bmatrix} \theta_0 \hspace{2em} \theta_1 \hspace{2em} ... \hspace{2em} \theta_n \end{bmatrix} \begin{bmatrix} x_0 \newline x_1 \newline \vdots \newline x_n \end{bmatrix} = \theta^T x \end{align*} $$
This is a vectorization of our hypothesis function for one training example; see the lessons on vectorization to learn more. [Note: So that we can do matrix operations with theta and x, we will set $x^{(i)}_0 = 1$, for all values of $i$. This makes the two vectors 'theta' and $x^{(i)}$ match each other element-wise (that is, have the same number of elements: $n + 1$).]
Now we can collect all $m$ training examples each with $n$ features and record them in an $n+1$ by $m$ matrix. In this matrix we let the value of the subscript (feature) also represent the row number (except the initial row is the "zeroth" row), and the value of the superscript (the training example) also represent the column number, as shown here:
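As a small made-up illustration of this layout and of the vectorized hypothesis (all numbers below are invented):

% m = 3 training examples, n = 2 features, stored as an (n+1) x m matrix:
% the first row holds the added x0 = 1 entries, and each column is one example x^(i).
X = [1    1    1;        % x0 for examples 1..3
     2104 1416 1534;     % x1, e.g. house size
     5    3    3];       % x2, e.g. number of bedrooms
theta = [89; 0.1; 10];   % (n+1) x 1 parameter vector (illustrative values)
h = theta' * X           % 1 x m row of predictions h_theta(x^(i)) for every example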
Monday, July 20, 2015
What is Machine Learning?
Two definitions of Machine Learning are offered. Arthur Samuel described it as: "the field of study that gives computers the ability to learn without being explicitly programmed." This is an older, informal definition.
Tom Mitchell provides a more modern definition: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."
Example: playing checkers.
E = the experience of playing many games of checkers.
T = the task of playing checkers.
P = the probability that the program will win the next game.
Supervised Learning
In supervised learning, we are given a data set and already know what our correct output should look like, having the idea that there is a relationship between the input and the output.
Supervised learning problems are categorized into "regression" and "classification" problems. In a regression problem, we are trying to predict results within a continuous output, meaning that we are trying to map input variables to some continuous function. In a classification problem, we are instead trying to predict results in a discrete output. In other words, we are trying to map input variables into discrete categories.
Example:
Given data about the size of houses on the real estate market, try to predict their price. Price as a function of size is a continuous output, so this is a regression problem.
We could turn this example into a classification problem by instead making our output about whether the house "sells for more or less than the asking price." Here we are classifying the houses based on price into two discrete categories.
Thursday, June 11, 2015
Robot Navigation - Q learning algorithm
Objective
The aim of this lab is to understand the reinforcement learning part of the autonomous robots course and to implement a reinforcement learning algorithm that learns a policy moving a robot to a goal position. The algorithm is Q-learning, and it will be implemented in Matlab.
1 - Introduction
The reinforcement learning algorithm does not force the robot to plan a path with any path-planning algorithm; rather, it learns an optimal solution by moving randomly around the map many times. It approximates a natural learning process, in which an unknown problem is solved purely by trial and error. The following sections briefly discuss the implementation and the results obtained by the algorithm.
Environment: The environment used for this lab experiment is shown below.
Figure-1: Environment used for the implementation.
States and Actions: The given environment has 20$\times$14 = 280 states. The robot can perform only 4 different actions: ←, ↑, →, ↓. Thus, the Q matrix has 280$\times$4 = 1120 cells.
Dynamics: The dynamics move the robot in a direction according to the chosen action. The robot moves one cell per iteration in the direction of the selected action, unless there is an obstacle or a wall in front of it, in which case it stays in the same position.
Reinforcement function: The reinforcement function assigns a reward to each cell: +1 for the goal cell and -1 otherwise.
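A minimal Matlab sketch of the resulting Q-learning loop follows. The learning parameters (alpha, gamma, epsilon), the episode count, the variables startState and goalState, and the helper simulate (which applies the dynamics and the reinforcement function above) are all assumptions of this sketch rather than the lab's actual code.

nStates = 280; nActions = 4;           % 20 x 14 grid, actions <- ^ -> v
Q = zeros(nStates, nActions);
alpha = 0.1; gamma = 0.9; epsilon = 0.1;   % assumed learning parameters
for episode = 1:5000
    s = startState;                    % assumed start cell index
    while s ~= goalState
        if rand < epsilon
            a = randi(nActions);       % explore: random action
        else
            [~, a] = max(Q(s,:));      % exploit: current greedy action
        end
        [s2, r] = simulate(s, a);      % assumed helper: dynamics + reward above
        % Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q(s,a) = Q(s,a) + alpha*(r + gamma*max(Q(s2,:)) - Q(s,a));
        s = s2;
    end
end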
Sunday, May 31, 2015
Implementing Object Detection Based on Color in Webot Simulator for E-puck
This project was implemented by Richa AGARWAL, Taner GUNGOR and Pramita WINATA.
Abstract - Object detection and recognition is a challenging task in computer vision systems, so it was decided to tackle it with the E-puck. With a real E-puck connected to the system through Bluetooth, however, it is difficult to transfer the images captured by the robot's camera, so it was decided to use the Webot simulator for the E-puck robot to develop and test an algorithm that detects objects by their color. The robot scans for the object: if it detects the goal, it moves in the direction of the goal while avoiding obstacles; otherwise it moves randomly around the arena looking for the goal (a red object). The most relevant aspects of the simulator and the implementation are explained.
Keywords-Webot simulator, e-puck, path planning
INTRODUCTION
We implement a simple object detection algorithm in the Webot simulator for the E-puck using a C controller. The algorithm is designed to detect red objects using the E-puck's camera. Controlling the robot and grabbing images from it is easier with the Webot simulator and controller.
1 - WEBOTS SIMULATOR
Webots is a development environment used to model, program and simulate mobile robots. With Webots the user can design complex robotic setups, with one or several, similar or different robots, in a shared environment. The properties of each object, such as shape, color, texture, mass, friction, etc., are chosen by the user. A large choice of simulated sensors and actuators is available to equip each robot. The robot controllers can be programmed with the built-in IDE or with third party development environments. The robot behavior can be tested in physically realistic worlds. The controller programs can optionally be transferred to commercially available real robots. Webots is used by many universities and research centers worldwide. The development time it saves is enormous.
Webots allows you to perform the 4 basic stages in the development of a robotics project: model, program, simulate, and transfer, as depicted in Fig. 1.
Figure-1: Webots development stages
Tags:
Artificial Intelligence,
Autonomous Robots,
C Programming,
Computer Vision,
Programming,
Project,
Robot Navigation,
State Machine
Tuesday, May 26, 2015
Understanding k-Nearest Neighbour
Goal
In this chapter, we will understand the concepts of k-Nearest Neighbour (kNN) algorithm.
Theory
kNN is one of the simplest classification algorithms available for supervised learning. The idea is to search for the closest matches to the test data in feature space. We will look into it with the image below.
In the image there are two families, Blue Squares and Red Triangles. We call each family a Class. Their houses are shown on their town map, which we call the feature space. (You can consider a feature space as a space onto which all data are projected. For example, consider a 2D coordinate space. Each datum has two features, its x and y coordinates, and you can represent it in your 2D coordinate space, right? Now imagine there are three features: you need a 3D space. Now consider N features, for which you need an N-dimensional space; this N-dimensional space is the feature space. In our image, you can consider it a 2D case with two features.)
Now a new member comes into the town and builds a new home, shown as a green circle. He should be added to one of the Blue/Red families. We call that process Classification. What do we do? Since we are dealing with kNN, let us apply this algorithm.
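As a sketch of the vote in Matlab, on made-up stand-ins for this toy example (all coordinates and the choice k = 3 are invented):

train  = [1 1; 2 1; 1 2; 8 8; 9 8; 8 9];   % houses in the 2-D feature space
labels = [1; 1; 1; 2; 2; 2];               % 1 = Blue Square, 2 = Red Triangle
query  = [7 7];                            % the newcomer (green circle)
k = 3;
d = sqrt(sum(bsxfun(@minus, train, query).^2, 2));  % Euclidean distances
[~, idx] = sort(d);                        % nearest neighbours first
pred = mode(labels(idx(1:k)))              % majority vote among the k nearest

With a small k the vote is dominated by the immediate neighbourhood; larger k smooths the decision at the cost of blurring class boundaries.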
Tags:
Algorithm,
Computer Vision,
Course,
Machine Learning,
Mathematics
Tuesday, May 12, 2015
Robot Navigation - Rapidly-Exploring Random Tree Algorithm
Objective
The aim of this post is to understand the rapidly-exploring random tree and implement it in Matlab.
1 - Introduction
A rapidly exploring random tree (RRT) is an algorithm designed to efficiently search nonconvex, high-dimensional spaces by randomly building a space-filling tree. The tree is constructed incrementally from samples drawn randomly from the search space and is inherently biased to grow towards large unsearched areas of the problem. It is widely used in autonomous robot path planning.
2 - The Algorithm
RRTs were proposed as both a sampling algorithm and a data structure designed to allow fast searches in high-dimensional spaces in motion planning. RRTs are progressively built towards unexplored regions of the space from an initial configuration, as shown in Figure 1.
Figure-1: Progressive construction of an RRT.
At every step a random configuration $q\_rand$ is chosen, and the configuration already in the tree nearest to it, $q\_near$, is found. For this a definition of distance is required (in motion planning, the Euclidean distance is usually chosen as the distance measure). When the nearest configuration is found, a local planner tries to join $q\_near$ with $q\_rand$ up to a limit distance. If $q\_rand$ was reached, it is added to the tree and connected with an edge to $q\_near$. If $q\_rand$ was not reached, then the configuration $q\_new$ obtained at the end of the local search is added to the tree in the same way, as long as there was no collision with an obstacle during the search. This operation is called the Extend step, illustrated in Figure 2. The process is repeated until some criterion is met, such as a limit on the size of the tree.
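As an illustration, one Extend step can be sketched in Matlab as below; tree is an N x 2 list of configurations, parent records the edges, and mapSize, the growth limit step, and the local planner collisionFree(...) are assumed names standing in for the actual implementation.

q_rand = rand(1,2) .* mapSize;                       % random configuration
d = sqrt(sum(bsxfun(@minus, tree, q_rand).^2, 2));   % distance to every tree node
[dmin, i] = min(d);                                  % index of q_near
q_near = tree(i,:);
% walk from q_near toward q_rand, but no farther than the limit distance
q_new = q_near + min(step, dmin) * (q_rand - q_near) / max(dmin, eps);
if collisionFree(q_near, q_new)                      % assumed local planner test
    tree   = [tree; q_new];                          % add the node ...
    parent = [parent; i];                            % ... and the edge to q_near
end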
Friday, May 1, 2015
Robot Navigation - A Star Algorithm
Objective
The aim of this lab is to understand the A* algorithm and implement it in Matlab.
1 - Introduction
In computer science, A* is a computer algorithm that is widely used in pathfinding and graph traversal, the process of plotting an efficiently traversable path between points, called nodes. A* achieves better time performance by using heuristics.
2 - The Algorithm
A* uses a best-first search and finds a least-cost path from a given initial node to the goal node. As A* traverses the graph, it follows a path of the lowest expected total cost or distance, keeping a sorted priority queue of alternate path segments along the way.
It uses a knowledge-plus-heuristic cost function of node x to determine the order in which the search visits nodes in the tree. The cost function is a sum of two functions:
- the past path-cost function, which is the known distance from the starting node to the current node x (denoted g(x))
- a future path-cost function, which is an admissible "heuristic estimate" of the distance from x to the goal (denoted h(x)).
The search always expands the node with the lowest total cost f(x) = g(x) + h(x). If the heuristic h satisfies the additional condition h(x) ≤ d(x,y) + h(y) for every edge (x, y) of the graph (where d denotes the length of that edge), then h is called consistent. In such a case, A* can be implemented more efficiently: no node needs to be processed more than once. Now let's look closely at each of the steps.
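To illustrate the ordering induced by f = g + h, here is a rough Matlab sketch of the main loop. It is not the lab's code: nNodes, startNode, goalNode, the precomputed heuristic vector h, the edge-length function d(x,y), and the helper neighbours(x) are all assumptions, and for brevity duplicate entries in the open list are tolerated instead of using a true priority queue.

open = startNode;                      % nodes waiting to be expanded
g = inf(nNodes,1); g(startNode) = 0;   % best known cost from the start
cameFrom = zeros(nNodes,1);            % predecessors for path reconstruction
while ~isempty(open)
    [~, k] = min(g(open) + h(open));   % pick the node with the lowest f = g + h
    x = open(k); open(k) = [];
    if x == goalNode, break; end
    for y = neighbours(x)'             % assumed helper returning adjacent nodes
        if g(x) + d(x,y) < g(y)        % found a cheaper path to y
            g(y) = g(x) + d(x,y);
            cameFrom(y) = x;
            open(end+1) = y;           % re-open y with its improved cost
        end
    end
end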
Thursday, April 30, 2015
Robot Navigation - The Rotational Sweep Algorithm
Objective
The aim of this post is to understand the rotational plane sweep algorithm to build a visibility graph and implement it in Matlab.
1 - Introduction
The rotational plane sweep is a path planning algorithm based on topological maps. It is one of the most powerful methods of intelligent robot navigation. The basic concepts and details of the algorithm are explained in the next chapter. After that, we look at the results.
2 - The Algorithm
A topological map is a simplified map that keeps only the relationships between points. It can be represented as a graph:
- nodes are real positions
- edges join positions in the free space.
To create the visibility graph for a 2D polygonal configuration space, we must define:
- The nodes $v_{i}$ of the visibility graph include the start location, the goal location, and all the vertices of the configuration space obstacles.
- The graph edges $e_{ij}$ are straight-line segments that connect two line-of-sight nodes $v_{i}$ and $v_{j}$.
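The elementary operation underneath the sweep is testing whether two nodes see each other. A brute-force Matlab sketch of that test is below (the rotational sweep computes the same information more efficiently); edges is an E x 4 list of obstacle segments [x1 y1 x2 y2], and segIntersect is an assumed segment-intersection helper.

function vis = lineOfSight(vi, vj, edges)
% True when the straight segment vi-vj crosses no obstacle edge.
vis = true;
for k = 1:size(edges,1)
    if segIntersect(vi, vj, edges(k,1:2), edges(k,3:4))
        vis = false;                   % blocked by an obstacle edge
        return;
    end
end
end

The rotational sweep obtains the same visibility information more cheaply by sorting the vertices by angle around each node and maintaining the list of edges currently hit by the sweeping half-line.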
Saturday, March 28, 2015
Robot Navigation - The Wavefront Planner Algorithm
Hi reader, this report was written for the 'Autonomous Robots' lab work. It explains the wavefront planner algorithm. At the end of this post, you can see the Matlab code and the report itself.
1 - Introduction
The theory behind robot maze navigation is immense; it would take several books just to cover the basics. This lab work concentrates only on the wavefront planner algorithm, which is still a powerful method of intelligent robot navigation. The basic concepts and details of the algorithm are explained in the next chapter. After that, we look at the results.
2 - The Algorithm
The wavefront algorithm finds a path from point S (start) to point G (goal) through a discretized workspace such as the one below (0 designates a cell of free space, 1 designates a cell fully occupied by an obstacle, and 2 marks the goal cell):
\[
\begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 2 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
\end{bmatrix}
\]
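A minimal Matlab sketch of the wave propagation follows, assuming the matrix above is stored in a variable W (0 free, 1 obstacle, 2 goal); the surrounding border of 1s means no bounds checks are needed. Once the wave has been propagated, the path is read off by starting at S and repeatedly stepping to a neighbour with a smaller label.

map = W;                                % the workspace matrix shown above
[gr, gc] = find(map == 2);              % locate the goal cell
front = [gr gc];                        % current wavefront
while ~isempty(front)
    newFront = [];
    for k = 1:size(front,1)
        r = front(k,1); c = front(k,2); v = map(r,c);
        for d = [0 -1; 0 1; -1 0; 1 0]' % 4-connected neighbours
            rr = r + d(1); cc = c + d(2);
            if map(rr,cc) == 0          % free and not yet reached
                map(rr,cc) = v + 1;     % label with the next wave value
                newFront = [newFront; rr cc];
            end
        end
    end
    front = newFront;                   % expand the next ring of the wave
end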
Wednesday, January 21, 2015
Face Recognition
Chapter 1: Introduction
1. Preliminary
In this project, we implemented a face recognition system using principal component analysis, known as PCA. The PCA method provides a mathematical way to reduce the dimension of the problem.
Since most elements of a facial image are highly correlated, it is better to extract a set of interesting and discriminative features from it. Mathematically speaking, we transform the correlated data into independent data. To implement the transform, we employ linear algebra methods such as the SVD (Chapter 3). The main idea is to obtain eigenfaces such that every face can be regarded as a linear combination of these eigenfaces (Chapter 4). The face recognition problem then turns into a mathematical one: what is the linear combination for a given face? In other words, it simplifies the problem from 2D to 1D.
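As a rough Matlab sketch of this pipeline (illustrative only: faces is assumed to be a d x N matrix whose columns are the vectorized training images, and the number of retained components k is an arbitrary choice):

meanFace = mean(faces, 2);
A = bsxfun(@minus, faces, meanFace);    % center the training faces
[U, ~, ~] = svd(A, 'econ');             % columns of U are the eigenfaces
k = 20;                                 % keep the k leading eigenfaces (assumed)
E = U(:, 1:k);
w = E' * A;                             % each face as k coefficients (the "1D" view)
recon = bsxfun(@plus, E*w, meanFace);   % approximate reconstruction from weights
% A new face x would be recognized by comparing its weights E' * (x - meanFace)
% against the columns of w.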
Tags:
Linear Algebra,
Mathematics,
MATLAB,
PCA,
Programming,
Project
Friday, January 16, 2015
Introduction to Spectral Mesh Analysis: Toward a simple implementation in C++
Hi reader, as I said before, I want to share what I have done and learned in the Vision & Robotics master program. This post covers our semester project. The base code was given to all the students, and the professor wanted us to improve it and apply spectral mesh analysis on top of it. You can find more information at the project link at the end of the post. If you have any questions, feel free to shoot. Here I am only presenting the graphical user interface and how to use the program, as described in the project report.
6.1 Framework choice
Because of the constraint that the project must be developed in C++ under the Qt IDE, we used Qt Widgets, which are mature, feature-rich user interface elements suitable for mostly static user interfaces. Besides, since Qt Widgets are native C++ elements, it is easier to merge the UI with the application logic. The application UI is connected to all other parts of the application through the class Logic, which handles all the data interchange between the UI and the algorithms. This makes it possible to split the application into separate parts.
6.2 Basic elements of UI
The UI is straightforward and easy to use. There are two main parts: an OpenGL screen and a sidebar. Several usage scenarios of the application can be extracted from the task:
- Load of the file
- Adjust the camera properties
- Adjust the light properties
- Adjust the displaying mode
- Calculate Laplacian Matrix and set new colors according to it
- Find the shortest path from one node to another
Tags:
C++,
Linear Algebra,
OpenGL,
PCA,
Programming,
Project,
Qt