Monday, June 2, 2008

eBay Questions

Please post one eBay question here.

Saturday, May 31, 2008

Wednesday, May 28, 2008

Open Source

Bryce LiaBraaten
-------
Open source is a software development methodology. Open source software gives the user much more control than traditional systems do. Users of open source get access to the source code, which allows them to tailor the software to their needs. Open source software is also free, meaning no licensing fees, which makes it attractive to many businesses. While the software itself is free, there are other costs associated with it: first, you will need to train employees so they can actually use the resource. On the other hand, open source software like Linux often runs better than its proprietary counterparts, such as Microsoft Windows, and because you can tailor it to exactly what you want to do, it wastes fewer computer resources. While there are many benefits to open source, there are some downsides. There is no formal support behind your open source projects, so you can't just call a help line. That makes open source less than ideal for critical functions, though over time open source software is becoming much more reliable.
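As a minimal sketch of what "tailoring to your needs" can look like in practice (the report tool and function below are invented for illustration, not taken from the reading): because the source code ships with the software, a business can simply edit it.

```python
from datetime import date

# Upstream default in a hypothetical open-source report tool:
def format_report_date(d):
    return d.strftime("%m/%d/%Y")   # US-style dates

# With the source in hand, a European business edits its own copy;
# this tailored definition replaces the default above.
def format_report_date(d):
    return d.strftime("%d.%m.%Y")   # European-style dates

print(format_report_date(date(2008, 6, 2)))  # prints 02.06.2008
```

With proprietary software, that one-line change would require a feature request to the vendor instead of an afternoon's edit.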

Open source is one of the great ideas that have been helped immensely by the flatteners. Now, with people all over the world contributing to its development, open source is coming into its own, and it is a great solution for many businesses. By using these new systems, computing can be more efficient because fewer resources are wasted. Open source also opens itself to a lot of expansion and research by giving everyone the source code. I think this is a great idea because programmers all over the world are able to improve open source software, which means that in the future open source may be the standard.

Darcy Cronwell
-----
Open source software is source code that anyone is free to run or modify, altering it however they see fit, with many more design possibilities than standard proprietary software. Many companies don't believe that open source is effective; they see it as "flaky, cheap and the work of amateur developers." However, these myths about open source are just an excuse for businesses that are lagging in the technology world.

Being cheap does not mean the quality is low. Just by moving to Linux, an open source operating system, Employease's Alberg saw service failures drop dramatically and applications run faster, even as the business doubled. Cost savings shouldn't be the deciding factor anyway; the question should be which technology is best. You may have to pay for training, maintenance, and support, but open source keeps you away from vendor lock-in and from having to upgrade and pay more money. Open source has zero marginal cost because it does not require additional licenses as the business grows.
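To make the zero-marginal-cost point concrete, here is a minimal sketch with entirely hypothetical numbers (the fees below are invented, not from the article): proprietary licensing adds a fee for every new seat, while open source adds none.

```python
def proprietary_cost(seats, base=10_000, per_seat=300):
    """Up-front cost plus a license fee for every seat added."""
    return base + per_seat * seats

def open_source_cost(seats, training_and_support=15_000):
    """Fixed costs (training, support) but no per-seat licenses."""
    return training_and_support

for seats in (10, 100, 1_000):
    print(seats, proprietary_cost(seats), open_source_cost(seats))
```

Under these made-up figures the proprietary bill keeps climbing with headcount while the open source figure stays flat; that flat line is what "zero marginal cost" means.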

Open source has large, distributed communities of developers, which means there is no single "go-to guy" for support when something goes wrong. However, having multiple sources of help has proven useful: because the community is worldwide, you can get support, communicate with a developer, or download a patch at any time of day. Thomas Jinneman, IT director of RightNow Technologies, notes that "we've had more trouble getting support for some of our purchased commercial applications than we've had with open-source applications." Even for mission-critical apps, open source has proven to deliver faster and more reliable service.

Companies with an external focus, which are used to working collaboratively with other organizations and are already using collaborative technologies, will gain much more from open source than companies with an internal focus. Even a company like Microsoft has its drawbacks: security loopholes, license fees, and so on.

Culture of the Amateur

Robert Clausen
--------
In his blog entry "The Amorality of Web 2.0," Nicholas Carr explains what he sees as wrong with Web 2.0. He says that Web 2.0 is supposed to be a great collective intelligence of the entire world, but it is not turning out that way. He uses Wikipedia as his biggest example of the lack of quality that the collective intelligence of the internet produces. He argues that the internet is changing the economics of creative work, and not in a good way: people are reading things that are free and written by amateurs instead of paying for quality work done by professionals and checked by editors to make sure it is done right. As for bloggers, he claims there is an emphasis on opinion over reporting.

I agree with a lot of what Nicholas Carr said. However, I don't believe that the internet's collective intelligence will be of poor quality for very long. Web 2.0 is still relatively young, and I think the quality of the blogosphere will end up improving. Even though newspaper companies are going through many layoffs, blogs provide a place for those writers to share their work. His post was written in 2005, and I'm sure the quality of the Wikipedia examples he cites has already improved greatly since then. Web 2.0 may not yet be a perfectly reliable source of free information, but I can see it heading that way.

Meta Data

Stefan Cordova
------
Simonyi wrote and developed code for Microsoft Office for over 20 years. Now he wants to reinvent the way code is written using what he calls intentional programming. Simonyi left Microsoft in 2002 to pursue it: Microsoft was making significant strides with the .NET framework, and intentional programming was not yet practical, was far more disruptive, and required a radical change from existing programming models. Programming started out with basic binary code of 1s and 0s and has since made significant gains by writing code through abstractions; each year code gets more advanced by using more and more of them. Simonyi wants to change this process completely by having end users, rather than professional developers, drive the creation of programs. People who are trained experts in a field understand its processes better and are better placed to come up with a system than the programmers writing the code.

Intentional software invites two main lines of criticism. The first is the question of how you represent intent. The second is the law of leaky abstractions. Intentional programming itself can be understood through the simple metaphor of making a bench. The benchmaker arrives with odd-looking parts and asks each person what parts they like, posing questions such as: What is the most important feature? What is the next most important feature? What materials do you like? After each answer, the benchmaker displays the updated bench on a screen to see if it satisfies the users; if the image is not right, they backtrack and answer the questions differently. Once the users are satisfied, the benchmaker presses the "make" button and out comes a bench that everyone is happy with. Substitute software for benchmaking and you understand intentional programming. Users work in a language familiar to them and alter what appears on the screen without having to master the underlying code; the computer transforms the image or process they created and writes the code for them.
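As a toy sketch of that last step (everything below is invented for illustration and is not Simonyi's actual system), the users' answers can be captured as plain data, and a small "generator" can transform that stated intent into working code:

```python
# The users' answers from the benchmaking session, stated as plain data.
BENCH_INTENT = {
    "most_important": "seats four people",
    "material": "oak",
    "length_cm": 180,
}

def generate_bench_code(intent: dict) -> str:
    """Transform a high-level intent description into lower-level code."""
    lines = ["class Bench:", "    def __init__(self):"]
    for feature, value in intent.items():
        lines.append(f"        self.{feature} = {value!r}")
    return "\n".join(lines)

# Pressing the "make" button: the generated class source is printed.
print(generate_bench_code(BENCH_INTENT))
```

The users only ever touch the intent data; the generated class is the machine's business, which is exactly the division of labor the metaphor describes.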

This article was very interesting, although I could have done without the long history of Simonyi's life; I would rather it had focused more on intentional programming itself. This technology is really amazing, and I hope that Simonyi can actually deliver it. Our systems could be so much better if the end users who actually use these programs could develop the processes. People who have watched this project closely say that Simonyi is on the right track, and that gives me hope that this new technology will arrive sooner rather than later.

Sara Supple
------
Charles Simonyi is a man who has constantly set out to change and evolve the world of programming as we know it. Born in Hungary, he started his work on a relic of a computer that ran on vacuum tubes. After arriving in the US to attend Berkeley, he joined forces with a professor at a company that eventually failed, but the connection led to Simonyi working at the famous Xerox PARC labs and later becoming one of the greatest programmers at Microsoft. Specifically, Simonyi has worked to simplify coding procedures, ultimately allowing everyday users to code their own programs using what he calls intentional programming, or intentional software. The idea is that generic tools will allow users to interact with the coding behind their programs and make changes as they see fit. Instead of programmers handing off their coding work to 'worker bees,' as his earlier thesis had proposed, the code would go into a generator, "a program that takes in a set of relatively high-level commands and spits out more-detailed working code" (Technology Review). With this effort, it was Simonyi's hope that programmers could focus on the creative side of their work rather than wasting brain space on the minute, unnecessary details that coding required.

Simonyi's work has attracted some criticism, however. Some programmers don't believe it is possible to capture computer users' intentions, which would be like trying to understand what every individual brain intends. The second argument is that programmers don't like to be distanced from their code: they usually have their own way of doing things, and their personal creativity suffers.

I really can understand the relevance of this article. Dealing with code right now in my MIS 271 class, I can completely understand Simonyi's reasoning for wanting to make the coding process easier for the everyday user. However, I also understand the arguments against Simonyi's efforts. While it would be a great invention, when something goes wrong with a program (and it will, eventually), someone is going to have to go in and figure out what happened, and that takes place in the code. I also feel that if Simonyi were a typical entrepreneur with limited cash flow and a deadline to meet, his intentional software would no longer be an option. While he is busy perfecting his software to meet the needs of the companies he is working with now, software and computers will continue to evolve and leave him behind. Even though those who have researched his ideas understand the concepts and say he is on the right track, with no real deadline pushing him to get his product to market, I feel as if Simonyi's work will remain where it has been since its creation: not in the hands of consumers.

End of Corporate Computing

Bryan Adams
--------------
The industrial age brought forth the conceptualization of power production as a purchasable utility rather than a function of corporate generation. Similarly, information technology is becoming less an asset in the form of purchased software and physical equipment and more a utility purchased from vendors. Very simply, IT is shifting from being an asset companies own to a service they buy.

The problem is massive overinvestment in information technology. The amount of money being invested in data centers and individual components, epitomized by the personal computer, is largely unnecessary. The current standard for networking and data center design wastes both financial and managerial resources: people in power are distracted from doing real business because their time must be focused on maintaining current information technology systems. Additionally, the assets purchased from vendors are strikingly underutilized. This applies at every level of technology, from an employee's desktop computer, which may use only a fraction of its capabilities, to companies' expensive servers, which may run at only 10 to 35 percent of capacity.
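Some back-of-the-envelope math shows why those utilization figures matter (the fleet size and consolidation target below are made up for illustration; only the 10 to 35 percent range comes from the article):

```python
servers = 100        # hypothetical corporate fleet size
utilization = 0.15   # within the 10-35% range cited in the article
target = 0.70        # a plausible post-consolidation utilization target

# How many well-utilized machines could carry the same actual workload?
needed = servers * utilization / target
print(f"The work actually done fits on about {needed:.0f} servers")  # ~21
```

Under these assumptions roughly four out of five machines exist only to sit idle, which is the waste that utility-style centralization is meant to eliminate.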

The answer to overscaled and ineffective data centers is centralized supply in the form of a utility. Large-scale utility computing is rapidly becoming a viable reality, and IBM, Hewlett-Packard, and Electronic Data Systems are leading the way in offering computing power and information technologies as a resource purchased from a vendor. Virtualization, grid computing, and web services are key. Virtualization allows applications to run 'everywhere' despite platform or OS differences; grid computing allows a large number of hardware components to act as a single device; and web services are intended to standardize the interfaces between applications, as sketched below. The entire purpose of this IT movement is to turn disjointed systems into unified, large-scale infrastructures. The future of corporate computing appears doomed as computing becomes a utility provided by big companies, which give an outstanding competitive edge to the firms that use their services.
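Here is a minimal sketch of the web-services idea (the service name, endpoint, and data below are all invented for illustration): any client, on any platform, can consume the same standardized JSON-over-HTTP interface instead of relying on a bespoke link between two systems.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class InventoryService(BaseHTTPRequestHandler):
    """A hypothetical inventory lookup exposed as a web service."""

    def do_GET(self):
        # Any application that speaks HTTP and JSON can call this,
        # regardless of what language or OS it runs on.
        body = json.dumps({"item": "widget", "in_stock": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), InventoryService).serve_forever()
```

The standardization lives in the interface (HTTP verbs plus a JSON payload), not in either application's internals, which is what lets disjointed systems join a larger infrastructure.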

I think this is the perfect article for this class. If we intend to learn about relevant and emerging technologies, then understanding corporate innovations is essential. We can see a trend in the information technology world, and this trend may change the way individuals perceive computing. My first thought is: at what scale is utility computing feasible? Obviously it is possible at a large, firm-wide scale, but what about in a home? If a home has three personal computers, would a centralized network with only the pertinent software and necessary computing capacity reduce cost or increase functionality at that small scale? I can see that the cost of personal computers is low and the cost of such integrated IT systems is high, but the unpredictability of the future makes me question all of our current IT-related standards.

On a more practical note, I think it is important to recognize the competitive edge that 'ending corporate computing' may provide a firm. Firms must now decide how and when to take advantage of this new technological opportunity. Perhaps those who are first to adopt these utilities will gain immensely, or they might suffer the hardships that emerging technologies can bring. The best part of this article is knowing that a new production possibility frontier may exist because of this information technology transition.

--------------------
Craig Sugiyama
--------------------
The article "The End of Corporate Computing," written by Nicholas Carr, is a very important one to understand because it is a prediction of what he believes will happen to our information technologies.

He starts his article with a parallel between IT and the electrification revolution. Companies once had to produce their own power because of the steep prices they would have paid to small providers. Innovation and central distribution then allowed utility companies to deliver current more cheaply and to far more customers, so companies could save money and focus their spending on production. Carr argues that the same shift that happened in electricity is now happening in IT: companies will increasingly focus on where they can buy computing as a service rather than on maintaining it as an asset of their own.

Carr believes that if IT becomes a more "centralized" utility service, it will have a huge impact on companies. He believes that companies such as IBM and Google will be the winners, since they will be the ones providing these information technologies. Once corporate computing is gradually taken over by these large providers, smaller companies will be able to focus on their actual business, not their IT infrastructure. Within the next five years, he predicts, companies will be buying computing, not computers.

I thought this article was very interesting in that corporate computing may change from something produced within a company to something purchased from a specialized provider. I think it is very important to understand the economic potential of the concept Carr describes. I believe that the end of corporate computing means the beginning of large-scale centralized infrastructures that will provide the services companies need, allowing them to focus more on their direct activities.

Wednesday, May 21, 2008

Wiki Assignment

Post URLs to wiki here with your full names.