A program can seem like a large sheet of text. Changing a little text can cause the meaning of the whole document to change, so people make those changes carefully to avoid mistakes.
Superficially, that is all true, but what about modularity? We are often told it is better to write programs that are made of small reusable pieces, but how often are small pieces reused independently? Not very often. Reuse is tough. Even when pieces of software look independent, they often depend upon each other in subtle ways.
A seam is a place where you can alter behavior in your program without editing in that place.
Feathers, Michael C. (2004). Working Effectively with Legacy Code. Prentice Hall.
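A toy sketch of what such a seam might look like in Python (the `Invoice`/`TaxPolicy` names and numbers are hypothetical, not from the book). The constructor parameter is the enabling point: a test can change behavior at the seam without editing `Invoice` itself.

```python
class TaxPolicy:
    def rate(self, region):
        return 0.16  # production rule

class Invoice:
    def __init__(self, policy=None):
        # The constructor parameter is the enabling point of the seam.
        self.policy = policy or TaxPolicy()

    def total(self, amount, region):
        return round(amount * (1 + self.policy.rate(region)), 2)

# In a test we alter behavior at the seam, without touching Invoice:
class ZeroTax(TaxPolicy):
    def rate(self, region):
        return 0.0

print(Invoice().total(100, "MX"))           # -> 116.0
print(Invoice(ZeroTax()).total(100, "MX"))  # -> 100.0
```

The point is that the behavior swap happened entirely at the seam; the production class was never edited.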
What is data mining?
In data mining, the data is stored electronically and the search is automated— or at least augmented—by computer.
Data mining is defined as the process of discovering patterns in data. The process must be automatic or (more usually) semiautomatic. The patterns discovered must be meaningful in that they lead to some advantage, usually an economic advantage. The data is invariably present in substantial quantities.
How are the patterns expressed? Useful patterns allow us to make nontrivial predictions on new data. There are two extremes for the expression of a pattern:
- as a black box whose innards are effectively incomprehensible, or
- as a transparent box whose construction reveals the structure of the pattern.

Such patterns we call structural because they capture the decision structure in an explicit way.
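A small sketch of the contrast (the weather-style toy data and weights are invented for illustration, not from the book): the same prediction can come from a table of opaque weights or from rules whose structure a person can read.

```python
# "Black box": a table of learned weights whose innards explain nothing.
weights = {"sunny": -0.9, "overcast": 1.2, "rainy": 0.1, "windy": -0.7}

def black_box(outlook, windy):
    score = weights[outlook] + (weights["windy"] if windy else 0.0)
    return score > 0

# "Transparent box": a structural pattern, readable as decision rules.
def transparent(outlook, windy):
    if outlook == "overcast":
        return True          # rule 1: overcast -> play
    if outlook == "sunny":
        return False         # rule 2: sunny -> don't play
    return not windy         # rule 3: rainy -> play unless windy

print(black_box("overcast", True), transparent("overcast", True))
```

Both functions can make the same predictions; only the second one exposes the decision structure explicitly.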
Things learn when they change their behavior in a way that makes them perform better in the future.
Market basket analysis is the use of association techniques to find groups of items that tend to occur together in transactions, typically supermarket checkout data.
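A minimal sketch of the idea with toy checkout data (a real system would go on to compute confidence and lift for candidate rules): count how often item pairs co-occur across transactions and report their support.

```python
from itertools import combinations
from collections import Counter

# Toy transactions; each basket is the set of items in one checkout.
transactions = [
    {"bread", "milk", "eggs"},
    {"bread", "milk"},
    {"milk", "eggs"},
    {"bread", "milk", "butter"},
]

pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Support = fraction of transactions containing the pair.
for pair, n in pair_counts.most_common(2):
    print(pair, n / len(transactions))
```

Here `("bread", "milk")` appears in 3 of 4 baskets, so its support is 0.75; frequent pairs like this are the raw material for association rules.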
What’s the difference between machine learning and statistics? Cynics, looking wryly at the explosion of commercial interest (and hype) in this area, equate data mining to statistics plus marketing. In truth, you should not look for a dividing line between machine learning and statistics because there is a continuum—and a multidimensional one at that—of data analysis techniques. Some derive from the skills taught in standard statistics courses, and others are more closely associated with the kind of machine learning that has arisen out of computer science. Historically, the two sides have had rather different traditions. If forced to point to a single difference of emphasis, it might be that statistics has been more concerned with testing hypotheses, whereas machine learning has been more concerned with formulating the process of generalization as a search through possible hypotheses. But this is a gross oversimplification: statistics is far more than hypothesis testing, and many machine learning techniques do not involve any searching at all.
Witten, Ian H., and Eibe Frank. (1999). Data Mining: Practical Machine Learning Tools and Techniques. Elsevier.
As the semester starts, a new challenge has appeared: I was assigned the task of creating software capable of filtering information in the abstracts of research papers in order to classify them and build a network of people working on topics in roughly the same area. Universities lack funding for every single researcher, and research activity has grown a lot in the last couple of decades, so software capable of grouping professionals with the same interests could potentially reduce research costs. In the upcoming posts I'll summarize my machine learning studies, findings, and understanding.
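One way the assignment could start, sketched with only the standard library (the abstracts, names, and the 0.5 threshold are all made up for illustration): compare bag-of-words profiles of two abstracts with cosine similarity, and link two researchers when their abstracts are similar enough.

```python
import math
from collections import Counter

# Hypothetical researchers and abstracts, invented for this sketch.
abstracts = {
    "alice": "neural networks for image classification",
    "bob":   "deep neural networks and image recognition",
    "carol": "soil chemistry of volcanic regions",
}

def cosine(a, b):
    """Cosine similarity between the word-count vectors of two texts."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm

# Link two researchers when similarity exceeds an arbitrary threshold.
edges = [(p, q) for p in abstracts for q in abstracts
         if p < q and cosine(abstracts[p], abstracts[q]) > 0.5]
print(edges)
```

A real version would at least use TF-IDF weighting and stop-word removal, but even this crude profile puts the two vision researchers in the same cluster and leaves the soil chemist unconnected.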
In today's class we talked about Creative Commons and reserved rights. The system responsible for the first moon landing is now readily available online, after an enterprising former NASA intern uploaded the Apollo Guidance Computer code to GitHub this week.
When programmers at the MIT Instrumentation Laboratory set out to develop the flight software for the Apollo 11 space program in the mid-1960s, the necessary technology did not exist. They had to invent it.
They came up with a new way to store computer programs, called “rope memory,” and created a special version of the assembly programming language. Assembly itself is obscure to many of today’s programmers.
Chapter 11 of Software Project Survival Guide. The final preparations period builds on and extends the preliminary planning that was performed before the requirements development and architecture pass. At final preparations time, the project team is ready to create its first estimate, develop a plan to deliver its most important functionality, and refine its other plans.
As soon as requirements have been baselined, the project team can create meaningful estimates for effort, cost, and schedule. Keep these rules of thumb about software estimation in mind:
- It's possible to estimate software projects accurately.
- Accurate estimates take time.
- Accurate estimates require a quantitative approach, preferably one supported by a software estimation tool.
- The most accurate estimates are based on data from projects completed by the organization doing the current project.
- Estimates require refinement as the project progresses.
Estimation procedure guidelines.
The estimation procedure should be written down. Estimates should be created by an expert estimator or by the most expert development, quality assurance, and documentation staff available. Estimates should include time for all normal activities.
The project plan should not assume the team will work overtime; if it does, the project won't have any reserves to draw from. This is relevant to the project manager who visited us two weeks ago: he said that programmers tend to be more quality-minded, and that a programmer needs to believe the project can be achieved. If a programmer feels the project is impossible, he will stop working properly.
Estimates should be based on data from completed projects. If you are just starting out in the business, ask other people how much they budgeted and start from there.
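The "use data from completed projects" rule can be sketched as a simple calculation (all sizes and effort figures below are invented for illustration; real estimation tools do far more than this): derive the team's historical productivity, then apply it to the estimated size of the new project.

```python
# Hypothetical completed projects: size in function points, effort in
# staff-months. In practice this comes from the organization's own records.
completed = [
    {"size_fp": 120, "staff_months": 10},
    {"size_fp": 200, "staff_months": 18},
    {"size_fp": 80,  "staff_months": 7},
]

# Function points delivered per staff-month, pooled over past projects.
productivity = (sum(p["size_fp"] for p in completed)
                / sum(p["staff_months"] for p in completed))

new_size_fp = 150  # size estimate for the current project
estimate = new_size_fp / productivity
print(f"{estimate:.1f} staff-months")
```

Crude as it is, this is already a quantitative estimate grounded in the organization's own history rather than in wishful thinking, and it can be re-run as the size estimate is refined during the project.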
Hunt, Andrew, and David Thomas. The Pragmatic Programmer. Addison-Wesley.