Brookshear
Rating: 4
1) Are lossless systems always the best way to go?
If not, when should lossy systems take precedence? What conditions must hold
for the lost information to be acceptable?
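A tiny sketch of the distinction the question turns on, using Python's standard zlib for the lossless side; the "lossy" half is just rounding, chosen purely for illustration:

```python
import zlib

text = b"the quick brown fox jumps over the lazy dog " * 50

# Lossless: zlib round-trips the data exactly; nothing is discarded.
packed = zlib.compress(text)
assert zlib.decompress(packed) == text
print(len(text), len(packed))  # the repetitive text shrinks a lot

# Lossy (toy version): quantizing a signal by rounding throws precision
# away for good -- acceptable only if the consumer never needs it back,
# e.g. human perception of audio or images.
signal = [0.123, 0.456, 0.789, 0.101]
quantized = [round(x, 1) for x in signal]
print(signal, quantized)  # recovery is only approximate
```

The usual answer to "when is loss acceptable" is exactly the quantization case: the discarded detail is below what the end consumer can perceive or use.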
2) Perhaps I misunderstood something, but isn’t the
“commit point” the end of a process? Instead of saying a process has reached
the commit point, why don’t we simply say the process is complete?
3) What are the advantages and disadvantages of indexed
files and hash systems? How can we determine which one works more efficiently
in a given situation? Is one always better than the other? Is one at least usually better?
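One hedged way to see the trade-off in miniature (Python's dict standing in for a hash system, a sorted list with the standard bisect module standing in for an index; the record names are made up):

```python
import bisect

# Hash system: a dict gives O(1) average-case exact-match lookup,
# but its keys are in no useful order, so a range query needs a full scan.
hash_file = {"adams": 1, "baker": 2, "clark": 3, "davis": 4}
print(hash_file["clark"])  # fast exact-match retrieval

# Indexed system: a sorted index costs O(log n) per lookup,
# but ordered traversal and range queries come almost for free.
index = sorted(hash_file)
lo = bisect.bisect_left(index, "baker")
hi = bisect.bisect_right(index, "clark")
print(index[lo:hi])  # every key from "baker" through "clark"
```

On this sketch, neither is simply better: hashing wins when you only ever ask "find this exact key," while an index wins when you also need sequential or range access.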
DT Larose
Rating: 4
1) Larose says that data mining cannot run itself
and that there is a need for human supervision over any data mining project. Will
this always be the case? Computers are getting smarter all the time, right? So
will they ever reach the point where they can perform data mining tasks on
their own?
2) One of the fallacies related to data mining put forth
in the reading is the belief that data mining quickly pays for itself most, if not
all, of the time. Is there any way to predict when it actually will?
3) For case study #4 there was no real
deployment stage. Does there need to be deployment for a data mining project to
have value? Which of the other stages might be skipped over, and how would that
affect the value of the project?
Wayner
Rating: 3
1) How do you
balance the need for greater compression with the need for stability? On p. 20
of chapter two we read that variable-length coding can compress a file a great
deal, but it is also very fragile. Maybe we could give up some of the
compression in exchange for a little more stability. But give up too much
compression and we have to ask whether compressing was worth it in the first place.
Where is the balance?
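The fragility Wayner describes can be made concrete with a toy prefix code (the three-symbol code below is my own example, not Wayner's): because symbols have different lengths, a single flipped bit makes the decoder lose its framing, corrupting everything after the error, whereas a fixed-length code would corrupt only the one symbol containing the bad bit.

```python
CODE = {"a": "0", "b": "10", "c": "11"}   # a toy variable-length prefix code
DECODE = {v: k for k, v in CODE.items()}

def encode(text):
    return "".join(CODE[ch] for ch in text)

def decode(bits):
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in DECODE:          # greedy prefix decoding
            out.append(DECODE[buf])
            buf = ""
    return "".join(out)

bits = encode("aabc")              # "001011"
flipped = "1" + bits[1:]           # flip just the first bit
print(decode(bits))                # "aabc"
print(decode(flipped))             # "bbc" -- wrong length, every symbol wrong
```

One bit error changed not just one symbol but the message length itself, which is the stability cost that fixed-length (less compressed) codes avoid.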