The things I did:
I implemented 16 meta-attributes, plus a few more from the STATLOG and METAL projects.
I found some bugs in my code and took time to figure them out.
I will report the preliminary results ASAP this week.
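As a sketch of the kind of thing the meta-attribute code computes: below are a few simple STATLOG-style dataset meta-attributes (instance count, attribute count, class count, class entropy). This is only an illustration with hypothetical names; the actual 16 attributes implemented are not listed in this report.

```python
import math
from collections import Counter

def meta_attributes(instances, labels):
    """Compute a few simple STATLOG-style meta-attributes of a dataset.
    Illustrative subset only -- not the actual 16 attributes from the report."""
    n = len(instances)
    n_attrs = len(instances[0]) if instances else 0
    counts = Counter(labels)
    # Class entropy: H = -sum_c p_c * log2(p_c)
    class_entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {
        "num_instances": n,
        "num_attributes": n_attrs,
        "num_classes": len(counts),
        "class_entropy": class_entropy,
    }
```

Meta-attributes like these describe a dataset itself, so they can serve as input features when predicting which learner will do well on it.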
The problems I got:
I was tied up with odd jobs last weekend. It ate up a lot of my time, so I couldn't follow my original research schedule. I learned that I have to protect my research time at all costs.
My previously coded meta-attribute functions were spread across several projects, and it took time to integrate them all into one project.
The things I plan to do:
Get some positive preliminary results ASAP (no later than the next report).
Tuesday, February 19, 2008
Monday, February 11, 2008
Report for 2/13
** The things I did this week:
1. Additional results on 5-attribute data comparison
Data set: 100 random 5-attribute data sets.
Number of comparison instances: about 4950 (= all possible pairs out of 100)
1.1
Training accuracy from ID3
Similarity accuracy: 91.0303% (C4.5), 90.9293% (RandomCommittee), 89.63% (SVM), 89.5758% (MLP)
1.2
Training accuracy from MLP
Similarity accuracy: 89.4545% (C4.5), 89.798% (RandomCommittee), 90.1212% (SVM), 88.5859% (MLP), 89.9394% (Bagging)
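For reference, the "all possible pairs out of 100" instance count above can be reproduced directly (a minimal sketch; the integer indices stand in for the actual datasets):

```python
from itertools import combinations

# 100 base datasets (indices stand in for the actual dataset objects).
datasets = list(range(100))

# Every unordered pair of datasets becomes one comparison instance.
pairs = list(combinations(datasets, 2))

# C(100, 2) = 100 * 99 / 2 = 4950, matching the count reported above.
```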
2. I implemented a deterministic Q-learning algorithm as a starting point for future research.
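A minimal sketch of what a deterministic tabular Q-learning loop can look like (not the actual implementation; the function names and the corridor environment in the usage below are hypothetical). Because transitions are deterministic, the update Q(s,a) <- r + gamma * max_a' Q(s',a') needs no learning rate:

```python
import random

def deterministic_q_learning(states, actions, step, gamma=0.9,
                             episodes=200, max_steps=50, seed=0):
    """Tabular Q-learning for a *deterministic* environment.
    With deterministic transitions the Bellman backup can simply
    overwrite the table entry: Q(s,a) = r + gamma * max_a' Q(s',a')."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = rng.choice(states)
        for _ in range(max_steps):
            a = rng.choice(actions)          # pure random exploration
            s2, r, done = step(s, a)
            best_next = 0.0 if done else max(Q[(s2, a2)] for a2 in actions)
            Q[(s, a)] = r + gamma * best_next
            if done:
                break
            s = s2
    return Q

# Hypothetical 4-state corridor: move left/right, reward 1 on reaching state 3.
def corridor_step(s, a):
    s2 = min(max(s + a, 0), 3)
    return s2, (1.0 if s2 == 3 else 0.0), s2 == 3

Q = deterministic_q_learning(states=[0, 1, 2], actions=[-1, 1], step=corridor_step)
```

After enough random episodes the table converges to the exact values (e.g. Q(2, +1) = 1, Q(1, +1) = 0.9), since each backup is an exact Bellman update rather than a noisy average.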
3. Reading
Kate A. Smith-Miles (Cross-Disciplinary Perspectives on Meta-Learning for Algorithm Selection): a good paper for reviewing how diverse disciplines approach selecting the best algorithm for various problem domains.
Question: how is our research goal different from landmarking? According to her paper, landmarking predicts the performance of one algorithm based on the performance of cheaper but effective algorithms.
** Problems I confronted.
Generating random data with a broad range of accuracies is hard.
When I generated the 100 5-attribute data sets, the accuracy range was only between 2.24% and 24.20%.
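One possible way around the narrow accuracy spread (an assumption on my part, not something tried in this report) is to plant a target concept and control the label-noise rate: purely random labels leave every learner near chance, while sweeping the noise level spreads achievable accuracies across a wide range. A sketch with hypothetical names:

```python
import random

def make_dataset(n=100, n_attrs=5, noise=0.0, seed=0):
    """Generate a random boolean dataset with a planted rule (majority of
    attributes) and a tunable label-noise rate. Lower noise means a learner
    can score higher, so sweeping `noise` spreads dataset learnability."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = [rng.randint(0, 1) for _ in range(n_attrs)]
        y = int(sum(x) > n_attrs // 2)   # planted target concept
        if rng.random() < noise:
            y = 1 - y                    # flip label with probability `noise`
        data.append((x, y))
    return data

# Sweep noise levels to get datasets of widely varying learnability.
datasets = [make_dataset(noise=p) for p in (0.0, 0.1, 0.25, 0.5)]
```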
** Plan for next week.
1. Experiment with more data sets having different numbers of attributes, and experiment with different algorithms.
2. Read 5 papers (the Q-Decomposition paper in ICML 2003; Task Decomposition, IEEE 1997; Recognizing Environmental Change, IEEE 1999; Environmental Adoption, IEEE 1999) and write summaries.
3. Extend deterministic Q-learning into non-deterministic Q-learning.
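For item 3, the non-deterministic extension amounts to replacing the deterministic overwrite with the standard learning-rate update Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)), which averages over stochastic transitions. A sketch of just that update rule (hypothetical names, not code from this project):

```python
def q_update(Q, s, a, r, s2, actions, alpha=0.1, gamma=0.9, done=False):
    """Standard (non-deterministic) Q-learning update. The learning rate
    `alpha` averages over stochastic transitions instead of overwriting
    the table entry outright, as the deterministic variant does."""
    best_next = 0.0 if done else max(Q[(s2, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q
```

With alpha = 1 and a deterministic environment this reduces exactly to the deterministic rule, so the earlier implementation is the special case.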