<b>Diary 2015</b> (2015-07-15)<br />
<br />
I don't know what to write. My keyboard is hot, so it is uncomfortable to type on.<br />
I had a heavy breakfast, as usual, and it makes me feel sluggish and lazy. I want to overcome these feelings.<br />
<br />
What I am now<br />
1. negative attitude about life<br />
2. lazy and selfish<br />
3. very greedy, covetous<br />
4. sensitive to other people<br />
5. heavy eater<br />
6. not a<br />
<br />
Do I know why, before trying to change them?<br />
1. the world is competitive<br />
2. it is not easy to obtain what I need<br />
3.<br />
<br />
What I want to be<br />
1. a positive person<br />
2. diligent<br />
3. generous to myself<br />
4.<br />
<br />
<b>2015 Goals</b> (2015-01-01)<ul>
<li>Physical and habitual aspects</li>
</ul>
<ol><ol>
<li>One hour of Arirang and thirty minutes of everyday OPIc per weekday</li>
<li>One hour of technical reading per weekday</li>
<li>Push-ups twice during the week</li>
</ol>
</ol>
<div>
<br /></div>
Revised: 2015-05-18<br />
1. Weekdays: Arirang (morning: 7:30-8:00), OPIc (evening: 8:30-9:00)<br />
2. Technical subjects: Tue, Thu (8:00-8:30)<br />
Algorithms: Mon, Tue, Wed, Thu (9:00-10:30), OPIc (10:30-11:30)<br />
<br />
Revised: 2015-05-27<br />
1. Plan in units of pages, not units of time.<br />
<ul>
<li>Mental aspects</li>
</ul>
1. Be humble<br />
2. Don't be greedy<br />
<br />
<div>
</div>
<ul>
<li>Outside my room</li>
</ul>
<ol><ol>
<li>7/10 energy</li>
<li>1 hour of work -> 2 hours</li>
</ol>
</ol>
<br />
May:<br />
June: finish the thesis, 1st OPIc exam, 1st SW certification exam<br />
July: 2nd OPIc exam<br />
August: 3rd OPIc exam, 2nd SW certification exam<br />
September: 4th OPIc exam<br />
October: 3rd SW certification exam<br />
November:<br />
December:<br />
<br />
Last week of May<br />
Algo 2: p854-863 (5/27 Wed) pass/fail -> pass<br />
p881-886 (5/28 Thu) pass/fail -> pass<br />
OPIc: week 4: 10, 11, 12 (5/27 Wed) pass/fail -> pass<br />
week 4: 8, 9, 10 (5/28 Thu) pass/fail -> pass<br />
<br />
June plan<br />
Week 1<br />
Algo 2: p898-p912 (6/1 Mon) - done<br />
p919-p930 (6/2 Tue) - done<br />
p930-p932 coding (6/3 Wed) - fail (this day is movie day)<br />
p930-p932 coding (6/4 Thu) - done<br />
OPIC: 4.4 ~ 4.6 (6/1 Mon) - done<br />
4.1 ~ 4.3 (6/2 Tue) - done<br />
3.18 ~ 3.20 (6/3 Wed) - done on Tuesday, and no study (movie day)<br />
3.15 ~ 3.17 (6/4 Thu) - done<br />
<br />
<br />
<b>Daily reflection 2: plans for this year</b> (2014-11-25)<br />
Remaining goals for this year:<br />
<br />
<ol>
<li>Chapter 6 of the Korean computer vision book</li>
<ul>
<li>Learn the various object detection methods used in vision.</li>
<li>Learn the non-maximum suppression method.</li>
</ul>
<li>Run BING</li>
<li>Learn the overall flow of speech recognition through the current pdf.</li>
<li>Try speech recognition with Kaldi.</li>
<li><span style="color: red;"><b>(Optional) finish chapters 1 and 2 of the writing book</b></span></li>
</ol>
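The non-maximum suppression method named in the goals above can be sketched in a few lines. This is a generic illustration, not any book's implementation; the function names, box format `(x1, y1, x2, y2)`, and the 0.5 overlap threshold are my own assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop boxes that overlap it too much, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

For example, given two overlapping detections of the same object plus one separate box, `non_max_suppression([(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)], [0.9, 0.8, 0.7])` keeps only the first and third boxes.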
<br />
<br />
Last week of November (Tue (partial), Wed, Thu)<br />
Tue (partial), Wed: Goal 1: chapter 3<br />
Thu: Goal 3: 3.1, 3.2; goal: chapter 4<br />
<br />
<br />
Three weeks of December (Mon, Tue, Wed, Thu)<br />
Week 1<br />
Mon: Goal 1: chapter 4, Goal 2<br />
Tue: Goal 1: chapter 4, Goal 2<br />
Wed: Goal 3: 3.3, 3.4; goal: chapter 4<br />
Thu: Goal 3: 3.5, 3.6, 4.1; goal: chapter 4<br />
Week 3<br />
Mon: Goal 1: chapter 5<br />
Tue: Goal 1: chapter 5<br />
Wed: Goal 3: 4.2, 4.3, 5.1<br />
Thu: Goal 3: 5.2, 5.3, 5.4, 5.5<br />
Week 4<br />
Mon: Goal 1: chapter 6<br />
Tue: Goal 1: chapter 6<br />
Wed: Goal 3: 6.1, 6.2<br />
Thu: Goal 3: 6.3, 7.1, 7.2<br />
<br />
<b><span style="color: red;"><br /></span></b>
<b><span style="color: red;">Progress still remaining</span></b><br />
<b><span style="color: red;">Goal 1: chapter 7</span></b><br />
<b><span style="color: red;">Goal 1: chapter 7</span></b><br />
<b><span style="color: red;">Goal 3: 7.3, 7.4</span></b><br />
<br />
<br />
<b>Daily reflection: assorted notes</b> (2014-11-25)<div style="text-align: left;">
<b>Orascom Group:</b></div>
<div style="text-align: left;">
An Egyptian company
A family company
The sons are presidents of the subsidiaries
Posts tens of trillions of won in revenue
Enjoys risk
Invested in North Korea.
The only foreign company to have invested in North Korea
Invested on the strength of North Korea's cheap labor and its post-unification potential.
In the mid-2000s, founded a telecom company in North Korea as a joint venture
In North Korea, mobile phones became popular as wedding gifts for unmarried women.
</div>
<br />
<b>National manager Kim In-sik</b><br />
Learning from 300 losses.
Started as a Tigers coach in the mid-80s
Debuted as manager of the Ssangbangwool Raiders in the early 90s
Took charge of several struggling clubs and delivered good results more than once.
Also managed the national team, winning and finishing runner-up at the world-class level.
In baseball you learn more from defeat than from victory.
When things go well for years on end, arrogance and a lack of consideration creep in.
You need about 300 losses before you can understand baseball.
Lead the players and the club with a warm heart rather than a hot one.<br />
<br />
<b>Strategy diary</b><br />
Set the goal first, rather than agonizing over what to do today.<br />
<br />
<b>Metamaterials</b><br />
Absorb acoustic sound
Make stealth submarines possible
Possible by wrapping a submarine in metamaterial that absorbs sound
The concept dates from the 70s-80s
Many applications are possible
Still at an early stage<br />
<br />
<b>Germany's Silicon Allee</b><br />
The place in Berlin where many startups cluster
Germany grafts IT onto its traditional manufacturing strength
In 2012 alone, about 9,000 ventures were founded
Germany's Silicon Valley, after the US and London
Still in its infancy compared to Silicon Valley
Bold investment is not yet happening, and fear of failure is still widespread.<br />
<br />
<b>Reflections on Japan's Nobel prizes</b><br />
Implications:
Research with little practical impact that yields results with modest effort vs. research with large practical impact that is hard to do<br />
Most people cling to the first kind, but the laureates took a strong interest in the second.<br />
All three came from regional universities, not the University of Tokyo or other elite schools.<br />
Many people stick to unpopular fields with craftsman-like devotion.<br />
They did not receive large research grants.
There is a lot of industry-academia research.<br />
In the end, what matters is working on the field you truly love.<br />
<br />
<b>HMM speech recognition</b><br />
I still don't really understand how it works.<br />
The paper is more or less a survey paper, and even that is not easy.<br />
But I feel I need to get past this.<br />
Connected to the decision-tree clustering problem.<br />
N-gram language model (3-gram): p(w) = &prod;<sub>i</sub> P(w<sub>i</sub> | w<sub>i-1</sub>, w<sub>i-2</sub>)<br />
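As a concrete illustration of the 3-gram formula above, here is a minimal maximum-likelihood trigram model. The `&lt;s&gt;`/`&lt;/s&gt;` padding tokens and function names are my own assumptions, and a real recognizer would add smoothing for unseen histories:

```python
from collections import defaultdict

def train_trigram(corpus):
    """Count trigrams and their bigram histories over tokenized sentences."""
    tri, bi = defaultdict(int), defaultdict(int)
    for sent in corpus:
        toks = ["<s>", "<s>"] + sent + ["</s>"]
        for i in range(2, len(toks)):
            tri[(toks[i - 2], toks[i - 1], toks[i])] += 1
            bi[(toks[i - 2], toks[i - 1])] += 1
    return tri, bi

def sentence_prob(sent, tri, bi):
    """p(w) = product over i of P(w_i | w_{i-2}, w_{i-1}), maximum-likelihood estimate."""
    toks = ["<s>", "<s>"] + sent + ["</s>"]
    p = 1.0
    for i in range(2, len(toks)):
        h = (toks[i - 2], toks[i - 1])
        if bi[h] == 0:
            return 0.0  # unseen history; a real system would smooth here
        p *= tri[(toks[i - 2], toks[i - 1], toks[i])] / bi[h]
    return p
```

Training on a single sentence and scoring that same sentence gives probability 1.0, since every conditional factor is count/count = 1.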
<br />
<b>Other</b><br />
Principal researcher Lim Yu-seon said I should study distributed computing.<br />
This is preparation for next year's collaboration with SRA-SV.<br />
Lim is smart and, at the same time, kind.<br />
<br />
<br />
<b>(paper for fun) An Introduction to Quantum Computing</b> (2008-09-25)<br />
Title: An Introduction to Quantum Computing.<br />
Author: Noson S. Yanofsky<br />
<br />
It gives a taste of quantum computing, targeting computer science undergraduates and even advanced high school students.<br />
<br />
Hilbert space: a regular vector space, except that each coordinate is a complex number.<br />
<br />
One of the key points in quantum computing:<br />
: a quantum state can exist in SEVERAL states AT THE SAME TIME.<br />
: when the quantum state is measured, it collapses to either 0 or 1 (in the case of a 2-bit quantum computer).<br />
<br />
<b>(Paper) Information-Theoretic Definition of Similarity</b> (2008-09-25)<br />
Title: Information-Theoretic Definition of Similarity.<br />
Conference: ICML 1998<br />
<br />
The paper provides a general similarity measure applicable across many domains.<br />
Previous similarity measures were specific to each domain.<br />
<br />
<b>Report for 3/4</b> (2008-03-04)<br />
What I did:<br />
Collected over 60 UCI data sets.<br />
Currently, 105 UCI data sets are available; however, about 70 of them are applicable to classification.<br />
<br />
Very fortunately, I think, I've found some interesting patterns for 3 or 4 meta features.<br />
<br />
The problem I got:<br />
For some data sets, MLP ate too much time. Even though I used three computers, I haven't gotten MLP results yet for 6-7 data sets.<br />
<br />
What I will do next time:<br />
There are many things to be done.<br />
1. Cluster data sets based on the vector of accuracies.<br />
2. 
Examine meta features per cluster.<br />
<br />
<b>Report for 2/19</b> (2008-02-19)<br />
The things I did:<br />
I implemented 16 meta-attributes, plus some more from the STATLOG and METAL projects.<br />
I found some bugs in my code, and it took time to figure them out.<br />
I will report the preliminary results ASAP this week.<br />
<br />
The problem I got:<br />
I was busy with some odd jobs last weekend. They ate a lot of my time, so I couldn't follow my original research schedule. I learned that I have to protect my research time at any cost.<br />
<br />
My pre-coded meta-attribute functions were spread across diverse projects,<br />
and it took time to integrate all of them into one project.<br />
<br />
The thing I plan to do:<br />
Get some preliminary positive results ASAP (no later than the next report).<br />
<br />
<b>Report for 2/13</b> (2008-02-11)<br />
** The things I did this week:<br />
<br />
1. Additional results on the 5-attribute data comparison<br />
Data set: 100 5-attribute random data sets.<br />
Number of data instances: about 4950 (= all possible pairs out of 100)<br />
1.1<br />
Training accuracy from ID3<br />
Similarity accuracy: 91.0303% (C4.5), 90.9293% (RandomCommittee), 89.63% (SVM), 89.5758% (MLP)<br />
<br />
1.2<br />
Training accuracy from MLP<br />
Similarity accuracy: 89.4545% (C4.5), 89.798% (RandomCommittee), 90.1212% (SVM), 88.5859% (MLP), 89.9394% (Bagging)<br />
<br />
2. I implemented the deterministic Q-learning algorithm as a starting point for future research.<br />
<br />
3. Reading<br />
Kate A. Smith-Miles (Cross-Disciplinary Perspectives on Meta-Learning for Algorithm Selection): a good paper for reviewing the diverse disciplines concerned with selecting the best algorithm for various problem domains.<br />
Question: how is our research goal different from landmarking? 
According to her paper, landmarking is predicting the performance of one algorithm based upon the performance of a cheaper yet effective algorithm.<br />
<br />
** Problems I confronted:<br />
Generating random data with a broad range of accuracies is hard.<br />
When I generated the 100 5-attribute data sets, the accuracy range was only between 2.24% and 24.20%.<br />
<br />
** Plan for next week:<br />
<br />
1. Experiment with more data sets having different numbers of attributes; experiment with different algorithms.<br />
<br />
2. Read 5 papers (Q-Decomposition paper in ICML 2003, Task decomposition in IEEE 1997, Recognizing Environmental Change, IEEE 1999, Environmental adaptation, IEEE 1999) and write summaries.<br />
<br />
3. Extend deterministic Q-learning into non-deterministic Q-learning.<br />
<br />
<b>Report for 1/30</b> (2008-01-29)<br />
<p>1. What you have done<br />
Steve implemented a random-arff-file-generator for me.<br />
<br />
With that code, I generated twenty 4-attribute arff files.<br />
I generated one file consisting of 180 data instances, where each instance contains the entropy, information gain, chi-square, and difference (DT accuracy) for a pair of arff files.<br />
<br />
I obtained MLP 64.7368%, SVM 42.1053%, DT 57.3684%.<br />
<br /><p>2. What problems you have encountered<br />
For data similarity, I don't have problems, just a lack of accuracy.<br />
<br /><p>3. Next week's plan<br />
During this week, I plan to implement standard Q-learning as a start and refine the idea.<br />
<br />
<b>Conference Information</b> (2008-01-16)<br />
<a href="http://axon.cs.byu.edu/conferences.php">Click this</a><br />
<br />
<b>Report for 1/23</b> (2008-01-16)<br />
<p>1. 
What you have done<br />
I examined the tree depth for each data set and calculated the correlation.<br />
It turned out that the adult+stretch and soybean data have the same tree height. That was good. HOWEVER, sometimes other pairs of data sets that show a huge difference in accuracy also have very similar heights.<br />
2. I ran our old sort-data-and-area-comparison method, and the correlation of the adult+stretch and soybean data was around 0.45.<br />
3. I reran the entropy-based-comparison method with different comparison methods. It hasn't shown good results yet.<br />
2. What problems you have encountered<br />
1. My problem is ignorance of which property of the data should be compared to find some correlation between a pair of data sets. For example, as we discussed, the adult+stretch and soybean data produce very similar accuracy even though so many differences exist between them.<br />
3. What possible solutions you are considering for these problems<br />
The entropy-based-comparison method produced about 0.6 correlation. That means the entropy-based method can somehow reflect the accuracy. But the problem is that it is not enough and not robust across arbitrary pairs of data. So I think we may need to push this method a little further.<br />
We could group data sets by some property and run the entropy-based-comparison method again.<br />
4. What you plan on doing in the coming week<br />
I will spend one or two days thinking about the above idea (how to cluster data sets) and run the entropy-based-comparison method with diverse vector comparison methods.<br />
5. New ideas, specific topics/issues you wish me or us to focus on in our discussion<br />
Well, I have spent almost a year thinking about data comparison, and the results haven't been very satisfactory yet. But I don't want to drop this, since you and I have put so much effort into it. I may want to continue this work as a side project, though practically I would be doing the same thing as I do now.<br />
<br />
I have no specific or solid idea of what to do for the next project. 
My mind is also drawn toward the reinforcement learning area.<br />
Here are two very abstract ideas about agent learning.<br />
1. The agent reaches the goal by heading in the direction of the goal instead of checking the reward at every state.<br />
or<br />
2. The agent learns two obstacles, where we assume each obstacle consists of two subcomponents in a source task. (Say obstacle 1 = A + B, obstacle 2 = C + D.)<br />
In typical learning, the learner (or agent) can identify other obstacles similar to obstacle 1 or obstacle 2. What happens if a new obstacle consists of A + C or B + D? In other words, the new, unseen obstacle consists of components drawn from each learned obstacle. Here is the scenario.<br />
A frog tries to reach home from a remote place by crossing some obstacles.<br />
One obstacle is a rat (the rat is represented by its moving motion and its stench).<br />
The other obstacle is a snake (the snake is identified by its temperature and its nasty sound).<br />
After learning, the frog is very good at avoiding the poison pond and the snake.<br />
One day, the frog encounters a new object which is moving but produces a similar nasty sound (say, a raven).<br />
I want the frog to decompose its obstacle experience and learn new obstacles if they consist of components from learned obstacles.<br />
I don't know whether this is possible. It's just an idea.<br />
</p>
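The deterministic Q-learning mentioned in the reports above could be sketched roughly like this. This is a toy of my own devising, loosely echoing the frog-reaches-home scenario; the 1-D chain task, state count, and discount factor are illustrative assumptions, not the actual implementation:

```python
import random

def q_learning(n_states=5, n_episodes=200, gamma=0.9):
    """Deterministic Q-learning on a 1-D chain: states 0..n-1, actions left/right,
    reward 1 only on reaching the rightmost state (the 'home' state)."""
    goal = n_states - 1
    Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action], 0=left, 1=right
    rng = random.Random(0)
    for _ in range(n_episodes):
        s = 0
        while s != goal:
            a = rng.randrange(2)                       # explore with random actions
            s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
            r = 1.0 if s2 == goal else 0.0
            # deterministic update: no learning rate needed, since transitions
            # and rewards are deterministic -> Q(s,a) = r + gamma * max_a' Q(s',a')
            Q[s][a] = r + gamma * max(Q[s2])
            s = s2
    return Q
```

After training, the greedy policy should point "right" from every non-goal state, i.e. `Q[s][1] > Q[s][0]` for all `s` below the goal.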