Wednesday, May 28, 2008

Open Source

Bryce LiaBraaten
-------
Open source is a software development methodology. Open source software gives the user much more control than traditional systems do. Users of open source software get access to the source code, which allows them to tailor it to their needs. Open source software is free, meaning no licensing fees, which makes it ideal for many businesses. While the software itself is free, there are other costs associated with it: first, you will need to train employees so they can use it effectively. Also, open source software like Linux often runs better than its counterparts, such as Microsoft Windows. Open source lets you tailor the system to exactly what you want to do, so it wastes fewer computer resources. While there are many benefits to open source, there are some downsides. There is often no formal support for open source projects, so you can't just call a help line. This makes open source less than ideal for critical functions, but over time open source software is becoming much more reliable.

Open source is one of the great ideas that have been helped immensely by the flatteners. Now, with people all over the world developing it, open source is coming into its own. Open source is a great solution for many businesses. By using these new systems, computing can be more efficient, because there will be less waste of resources. Open source also opens itself to a lot of expansion and research by giving everyone the source code. I think this is a great idea because programmers all over the world are able to improve on open source software. This means that in the future open source may be the standard.

Darcy Cronwell
-----
Open source software is source code that you can run or modify freely, altering it however you see fit with many different design possibilities, unlike standard proprietary software. Many companies don't believe that open source is effective; they see it as "flaky, cheap and the work of amateur developers." However, the myths about open source are just an excuse for businesses that are lagging behind in the technology world.

Just because it's cheap does not mean it lacks quality. Just by moving to Linux, an open source operating system, Employease's Alberg saw a dramatic drop in service failures and faster-running applications as the business doubled. Cost savings shouldn't be the deciding factor; the question should be which technology is best. You may have to pay for training, maintenance, and support, but open source keeps you away from vendor lock-in and from having to upgrade and pay more money. Open source has zero marginal cost because it does not require additional licenses as it grows.

Open source has large communities of developers, which means there is no single "go-to guy" for support when something goes wrong. However, having multiple sources has proven helpful: because the community is worldwide, you can get support, communicate with a developer, or download a patch no matter the time of day. Thomas Jinneman, IT director of RightNow Technologies, notes that "we've had more trouble getting support for some of our purchased commercial applications than we've had with open-source applications." Even for mission-critical apps, open source has proven to deliver faster and more reliable service.

Companies with an external focus, used to working collaboratively with other organizations and already using collaborative technologies, will gain much more from open source than companies with an internal focus. Even a company like Microsoft has its drawbacks: security loopholes, license fees, etc.

Culture of the Amateur

Robert Clausen
--------
In his blog entry “The Amorality of Web 2.0” Nicholas Carr explains some of the things that are wrong with web 2.0. He says that Web 2.0 is supposed to be a great collective intelligence of the entire world but it is not turning out that way. He uses Wikipedia as his biggest example to show the lack of quality that the collective intelligence of the internet comes up with. He is saying that the internet is changing the economics of creative work and not in a good way. People are reading things that are free and done by amateurs instead of paying for quality works that professionals do and are checked by editors to help make sure they are done right. With bloggers, he claims that there is an emphasis on opinion over reporting.

I agree with a lot of what Nicholas Carr said. However, I don't believe that the internet's collective intelligence will be of poor quality for very long. Web 2.0 is still relatively young, and I think the quality of the blogosphere will end up improving. Even though newspaper companies are going through many layoffs, blogs provide a place for these writers to share their work. His blog was written in 2005, and I'm sure the quality of his Wikipedia examples has already improved greatly since he wrote it. Web 2.0 may not be a perfectly reliable source of free information, but I can see it heading that way.

Meta Data

Stefan Cordova
------
Simonyi has been writing and developing code for Microsoft Office for over 20 years. Now he wants to reinvent the way code is written using what he calls intentional programming. Simonyi left Microsoft in 2002 to work on it. Microsoft had been making significant strides with the .NET framework, and Simonyi's intentional programming was not very practical, much more disruptive, and required a radical change from existing programming models. Programming started out with basic binary code of 1s and 0s and has since made significant gains by using abstractions to write code. Each year code gets more advanced through more and more abstraction. Simonyi wants to completely change this process by having end users, rather than the programming developers themselves, drive development. People who are trained experts in a field understand its processes better and would be better able to come up with a system than the programmers writing the code.

Intentional software invokes two main lines of criticism. The first is how you represent intent. The second is the law of leaky abstractions. Intentional programming can be understood through the simple metaphor of making a bench. The benchmaker arrives with odd-looking parts and asks each person what parts they like. He asks questions such as: what is the most important feature, what is the next most important feature, what materials do you like, and so on. After each answer, the benchmaker displays the updated bench on a screen to see if it satisfies the users. If the image is not right, they backtrack and answer the questions differently. Once the users are satisfied, the benchmaker presses the make button and out comes a bench that everyone is happy with. Substitute software for benchmaking and you will understand intentional programming. Users work in a language familiar to them and alter the appearance on the screen without having to master the underlying code. The computer transforms the image or process you created and writes the code for you.
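To make that last step concrete, here is a minimal, hypothetical sketch of the "describe your intent, let the machine write the code" workflow. This is not Simonyi's actual system; the intent fields, the generator function, and the bench example are all invented purely for illustration.

# A toy illustration of the "intent -> generated code" idea described above.
# This is NOT Simonyi's system; the intent structure and generator are invented.

def generate_bench_code(intent: dict) -> str:
    """Turn a user's high-level answers into ordinary, working Python code."""
    class_name = "".join(part.capitalize() for part in intent["name"].split("_"))
    lines = [f"class {class_name}:", "    def __init__(self):"]
    for feature, value in intent["features"]:
        # Each answered question becomes a plain attribute assignment.
        lines.append(f"        self.{feature} = {value!r}")
    return "\n".join(lines)

if __name__ == "__main__":
    # The benchmaker's questions and the users' answers.
    intent = {
        "name": "park_bench",
        "features": [("material", "oak"), ("seats", 3), ("weatherproof", True)],
    }
    source = generate_bench_code(intent)
    print(source)                        # the code the "generator" wrote
    namespace = {}
    exec(source, namespace)              # it is real, runnable Python
    bench = namespace["ParkBench"]()
    print(bench.material, bench.seats)   # -> oak 3

The point of the metaphor is that the users only ever touch the question-and-answer layer; the generated class is a by-product they never have to read.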

This article was very interesting although I could have gone without the long history of Simonyi’s life and instead just focused more on intentional programming. This technology is really amazing and I hope that Simonyi can actually develop it. Our systems could be so much better if the end users who actually use those programs could develop the processes. People who have watched this project closely say that Simonyi is on the right track and that gives me hope that this new technology will be developed sooner rather than later.

Sara Supple
------
Charles Simonyi is a man who has constantly set out to change and evolve the world of programming as we know it. Born in Hungary, he started his work on a relic of a computer run on vacuum tubes. Arriving in the US to attend Berkeley, he joined forces with a professor at a company that eventually failed, but that led to Simonyi working at the famous Xerox PARC labs and later becoming one of the greatest programmers at Microsoft. Specifically, Simonyi has worked to simplify coding procedures, ultimately allowing everyday users to code their own programs using what he calls intentional programming/software. The idea is that generic tools will allow users to interact with the coding behind their programs and make changes as they see fit. Instead of programmers handing off their coding work to 'worker bees' as his earlier thesis had asserted, the code would go into a generator, "a program that takes in a set of relatively high-level commands and spits out more-detailed working code" (Technology Review). With this effort, it was Simonyi's hope that programmers could focus on the creative process of their work, rather than wasting brain space on the minute and unnecessary details that coding requires.

Simonyi's work has attracted some criticism, however. Programmers don't believe it is possible to capture computer users' intentions, which would be like trying to understand the intentions of every individual's brain. The second argument is that programmers don't like to be distanced from their code: programmers usually have their own ways of doing things, and their personal creativity suffers.

I really can understand the relevance of this article. Dealing with code right now in my MIS 271 class, I can completely understand Simonyi's reasoning for wanting to make the coding process easier for the everyday user. However, I also understand the arguments against Simonyi's efforts. While it would be a great invention, when something goes wrong with the program (and it will, eventually) someone is going to have to dig in to figure out what happened, and that takes place in the code. I also feel that if Simonyi were a typical entrepreneur with limited cash flow and a deadline to meet, his intentional software would no longer be an option. While he is busy perfecting his software to meet the needs of the companies he is working with now, software and computers will continue to evolve and leave him behind. Even though those who have researched his ideas understand the concepts and say he is on the right track, with no real deadline pushing him to get his product on the market I feel as if Simonyi's work will remain where it has been since its creation: not in the hands of consumers.

End of Corporate Computing

Bryan Adams
--------------
The industrial age brought forth the idea of power production as a purchasable utility rather than something each corporation generated for itself. Similarly, information technology is becoming less an asset in the form of purchased software and physical equipment and more a utility purchased from vendors. Very simply, IT is shifting away from being an asset companies own to a service they buy.

The problem is that there is massive overinvestment in information technology. Much of the money being invested in data centers and individual components, typified by the personal computer, is unnecessary. The current standard for networking and data center design wastes financial and managerial power. People in power are distracted from doing real business because time must be spent maintaining current information technology systems. Additionally, assets purchased from vendors are dramatically underutilized. This holds at every level of technology, from an employee's desk computer, which may use only a fraction of its capabilities, to companies' expensive servers, which may run at only 10 to 35 percent of capacity.

The answer to overscaled and ineffective data centers is centralized supply in the form of a utility. Large-scale utility computing is rapidly becoming a viable reality. IBM, Hewlett-Packard, and Electronic Data Systems are leading the way in offering computing power and information technology as a resource purchased from a vendor. Virtualization, grid computing, and web services are key. Virtualization allows applications to run 'everywhere' despite platform or OS differences. Grid computing allows a large number of hardware components to act as a single device. Web services are intended to standardize interfaces between applications. The entire purpose of this IT movement is to turn disjointed systems into unified, large-scale infrastructures. The future of corporate computing appears doomed as computing becomes a utility provided by big companies, which offer a real competitive edge to the firms that use their services.
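The consolidation case rests on simple arithmetic: if servers sit at 10 to 35 percent of capacity, a shared, virtualized pool can carry the same workload on far fewer machines. Here is a back-of-the-envelope sketch; the fleet of 100 servers and the 70 percent target load on the virtualized hosts are assumptions chosen only to illustrate the math.

# Back-of-the-envelope consolidation math for the 10-35 percent utilization
# figures quoted above. The 100-server fleet and the 70% target load on the
# virtualized hosts are illustrative assumptions, not figures from the article.
import math

def hosts_needed(num_servers: int, avg_util: float, target_util: float) -> int:
    """How many virtualized hosts could absorb the same total workload."""
    total_work = num_servers * avg_util            # aggregate demand
    return math.ceil(total_work / target_util)     # hosts running at the target load

if __name__ == "__main__":
    for util in (0.10, 0.35):
        print(f"100 servers at {util:.0%} utilization -> "
              f"{hosts_needed(100, util, 0.70)} consolidated hosts at 70%")
    # Prints 15 hosts for the 10% case and 50 hosts for the 35% case.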

I think that this is the perfect article for this class. If we intend to learn about relevant and emerging technologies, then understanding corporate innovations is essential. Essentially, we can see that there is a trend in the information technology world, and this trend may change the way individuals perceive computing. My first thought is: on what scale is utility computing feasible? Obviously it is possible on a large, firm-wide scale, but what about in a home? If a home has three personal computers, would a centralized network with only the pertinent software and necessary computing capacity reduce cost or increase functionality on that small scale? I can see that the cost of personal computers is low and the cost of such integrated IT systems is high, but the unpredictability of the future makes me question all of our current IT-related standards.

On a more relevant note, I think it is important to recognize the competitive edge 'ending corporate computing' may provide a firm. Firms must now decide how and when to take advantage of this new technological opportunity. Perhaps those who are first to use these utilities will gain immensely, or they might suffer as the emerging technologies bring hardship. The best part of this article is knowing that a new production possibilities frontier may exist because of this information technology transition.

--------------------
Craig Sugiyama
--------------------
The article "End of Corporate Computing," written by Nicholas Carr, is a very important one to understand because it is a prediction of what he believes will happen with our information technologies.

He starts his article with a parallel between IT and the revolution of electrification. He talks about how companies once had to produce their own power because of the steep prices they would have paid to small providers. Innovation and central distribution let utility companies provide current more cheaply and to a broader range of people, so companies could save money and focus their resources on production. Carr then argues that the shift in IT is happening the way the shift in electricity happened. Companies are now going to focus more on where they can buy computing as a service rather than on maintaining it as an asset of their own.

Carr believes that if IT becomes more of a "centralized" utility service, it will have a huge impact on companies. He believes that companies such as IBM and Google will be the winners; they will be the ones providing these information technologies. Once corporate computing is gradually taken over by these large companies, the smaller ones will be able to focus on their actual business, not their IT infrastructure. Within the next five years, it is predicted, companies will be buying computing, not computers.

I thought this article was very interesting in that IT may change from something that is produced within a company to something that is purchased from a specialized provider. I think it is very important to understand the economic potential of the concept Carr describes. I believe that the end of corporate computing means the beginning of large-scale centralized infrastructures that will provide these services and allow companies to focus more on their direct activities.

Wednesday, May 21, 2008

Wiki Assignment

Post URLs to wiki here with your full names.

Sunday, May 18, 2008

Free $

Fahad Algahtani
--
In this article Chris Anderson explains the free business model: how many businesses have applied it and how the economy in general is shifting toward "free," which means providing services directly to customers at no charge without hurting the business's profitability. In the beginning of the 20th century, Gillette started giving razors away by bundling them with everything from packets of coffee and tea to spices, which increased the demand for, and sales of, disposable blades. This was one of the first applications of a free business model. Nowadays, different types of free have appeared; here are some common examples of how the free business model is applied:

1. Low-cost digital distribution has led to free web services, such as webmail and storage (e.g., Yahoo Mail).

2. Advertising in public places and on websites covers the cost of creating services, which lets companies such as Google and MSN provide them for free.

3. Cross-subsidies: giving one product away in order to sell something else.

4. Zero marginal cost: giving away things that can be distributed at no appreciable cost (e.g., online music).

I believe that advertising is the most efficient way for businesses to cover their costs and start providing free services to customers. In other words, a company cannot depend on the revenue generated by selling customers' information to others to cover its service costs. I also think that around 90% of free services are in some way related to web companies and technologies, which is a key factor in the free business model's success.

Max Schoenrock
- -
In my article, "Free! Why $0.00 Is the Future of Business," Chris Anderson talks about how and why businesses today are offering their services and products for virtually nothing. Companies such as Google and Yahoo are two businesses that give away their services and access for absolutely nothing. E-mail, search, maps, web page creators: these are all things that are available to us as internet users. So you may ask yourself, why do they do it and how do they make any money?

The main reason this notion of "free" is the way to be successful starts with the term "cross-subsidy." In the article, Chris Anderson uses the well-known brand Gillette as his model to explain the term. Basically, what King Gillette did was give away free razors in the hope that people would start using them and then buy new razors and shaving cream from Gillette after their blades became dull. In turn, people started using these free razors, got used to them, and then began buying more razors and the other Gillette products that went along with them. This idea spawned a whole new approach to selling products and services to consumers. For example, Google gives away free access to its web site because it knows outside businesses will pay Google to advertise their products and services to the enormous number of users it attracts.

I found this article very interesting because I always wondered how companies make money by giving away free things. Google was my favorite example because I remember when it first started out, and now it is one of the top, most-viewed web sites in the world. It all makes sense now. To compete with rival businesses, companies like Google have to offer their service for free, because if they don't, then Yahoo will dominate the market with its free service. It all comes down to who attracts the most users and how much free service a web site can offer me without asking for my credit card number. Free is the future of business on the web. More and more companies will be adopting this idea as time moves forward.

ABCs of eCommerce

Joaquin Chapar
--
The article is basically an FAQ about what business-to-consumer [B2C] electronic commerce [e-commerce] is, with a little about its history, its transition, and its position in business today. First, B2C differs from business-to-business [B2B] because, as the article states, they are different types of business. The hype when B2C started in the 1990s was that it would take out brick-and-mortar sellers. Now B2C companies are better team players with their IT departments than before. The challenge for B2C is to create, substitute for, or replicate the experience the customer gets when going to a physical store; the concerns range from ease of use and personalization to fulfillment [as mentioned] and an enjoyable experience. Cutting out the middleman is an issue in B2C because the web can bring the product to the consumer's doorstep; sometimes products online 'sell themselves,' and this creates channel conflict. B2C continues to be profitable and is expected to grow 10% by 2010. Rules and regulations, such as having a privacy policy, still apply in B2C, but there are different taxation rules on the internet. In a sense B2C is ahead of its time because right now there are no rules that regulate it completely.

I think this article explains the ABCs of B2C well; it does its job. It is very true that B2C e-commerce is strong and becoming more profitable. This can be seen in the many advertisements that companies make money from. One example, though I am not sure these count strictly as B2C e-commerce, is sites that nonetheless lead consumers to buy on other websites: sites such as Google, MySpace, and Facebook show you products on other sites and make revenue from that. This is all connected because, in my opinion, this is how the web has grown out of B2C, and now B2C companies, and even B2G or G2G, are jumping on it because they see the channels that B2C has opened on the internet.

Eun Mi Kwon
--
E-commerce means online transactions, and B2C means "business to consumer." It is not just online shopping, but also online banking, travel services, auctions, and so on. Even though there were some problems with the stocks, the expectation of a growing number of consumers is quite positive. For many companies, developing for the web used to be hard, but now it is getting easier.

The difference between B2C and B2B:
B2B customers are companies and need more security.
B2C customers are individuals.
B2C retailers don't have to deal with haggling, delivery, and product specification problems. They also don't have to rely on human intervention to communicate.

At the beginning, as the stock prices of some of the early pure plays went through the roof, B2C e-commerce was heavily hyped. After the Nasdaq dropped, spun-off companies couldn't get their initial offerings off the ground, and the over-hype around B2C disappeared. Nowadays, the fastest-growing sites are the ones that have put more value into their Internet operations.

In the early days, B2C initiatives were run by a separate IT department. Increasingly, IT departments are being integrated into the rest of the company.

The major challenges of B2C are:
Focusing on personalization
Creating customer service applications that work in a more live way, like chat or phone
Making your site easy to use
Fulfilling what customers require

Channel conflict occurs when a manufacturer or service provider bypasses a reseller or salesperson and starts selling directly to the customer. But now people are aware that websites are useful, and companies have devoted resources to online retail as part of their branding and commerce.

B2C e-commerce sites are really profitable, and they are making more and more money. However, you should know your privacy policy, at the very least so you don't land your company in bad lawsuits. You should also take care with taxes on internet sales, although there is a moratorium.

I thought e-commerce had only an upside and might be really easy to get into, but actually it is not, and I was still one of the people who greatly hyped e-commerce's arrival. It actually needs much more specific marketing, and big companies in particular should take care of their brick-and-mortar operations. The tax problem is also pretty complex: since customers live in different states, I think it could be hard to make a law that is fair to every customer. And as e-commerce develops, I think privacy policy should be emphasized more. Whatever happens, I think e-commerce is a really good way to earn money with the smallest investment.

Friday, May 16, 2008

Office Live Basics Assignment

Please post your URLs here.

Friday, May 9, 2008

The Long Tail

Kim McNeely

The long tail is a business strategy that allows consumers to find pretty much whatever they want, whenever they want. It is based on the idea of stocking more than just the top-ranked items, including those items that are hard to find because of their lack of popularity. Companies like Amazon.com and iTunes allow consumers to go online and find exactly what they want, whether it is a currently popular product or something else. iTunes even allows a consumer to purchase a single song instead of having to buy the entire CD. The long tail strategy makes it possible for online businesses to sell products that physical stores, for example Walmart, don't have the capacity to sell. Walmart must sell a certain number of copies of a CD to be able to afford the shelf space that one CD takes up. Online companies do not have to worry about inventory size or about keeping up with consumer demand just to pay for the space an item takes up.

The long tail strategy is beneficial for the consumer as well as the company. The online company saves money by not having to rent space to sell its products. The consumer benefits because they can find whatever they want with little effort. Companies that sell more than just the "hits," or top-ranked items, will attract a larger audience and make more profit. The long tail strategy has been a success because of the broader options it offers consumers.
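To make the shelf-space arithmetic above concrete, here is a minimal sketch of how many small sales can add up. The Zipf-like demand curve and the 1,000-title shelf cutoff are assumptions chosen for illustration; the catalog size loosely matches the Rhapsody figure quoted in the next summary.

# Illustrative only: the Zipf-like demand curve and the 1,000-title shelf
# cutoff are assumptions; the catalog size loosely matches the Rhapsody
# figure quoted in the next summary.

def demand(rank: int) -> float:
    """Toy demand for the title at a given popularity rank (Zipf-like)."""
    return 1.0 / rank

catalog_size = 735_000   # titles an online store can "stock"
shelf_cutoff = 1_000     # titles a physical store might actually carry (assumed)

head = sum(demand(r) for r in range(1, shelf_cutoff + 1))
tail = sum(demand(r) for r in range(shelf_cutoff + 1, catalog_size + 1))

print(f"Share of total demand coming from the 'misses': {tail / (head + tail):.0%}")
# With these assumptions, the titles a physical store would never stock
# carry roughly half of all demand.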

Alex Zabel

The long tail means reaching the whole market rather than just the main market. The article states that in a digital jukebox, the songs people choose aren't just the hit songs; it's the misses that make the money, because there are so many of them. The idea of the long tail is that you must target everyone, not just the main market. There are more people combined in these smaller markets than there are in the larger market. The companies that can let people find exactly what they want will profit the most. Amazon and iTunes are the best examples because they allow you to choose exactly what you want and purchase it easily. Rhapsody finds someone to listen to each of its top 100,000 songs every month, and it offers over 735,000 songs. This helps it reach the widest group of customers possible.

The long tail is a great way for any company to widen its customer base and increase its number of customers. Going after only a main market leaves so many potential customers within reach but unsatisfied. While it doesn't work in every market, it can be a great strategy for gaining customers.

Justin Blackburn

Until the end of the last century, retailers were mostly concerned with the 'big hits' of the music, video, and book markets, as opposed to the thousands of other no-name artists that are more niche oriented. For example, most brick-and-mortar stores will not carry a huge selection of documentaries because the number of people looking to rent or buy this niche of products in a given geographic area is too limited to justify the physical space. This 'hit-driven economics' persists because, for niche products, the benefits of carrying an item rarely outweigh the costs.

Now online retailers are discovering that a majority of the products people want to buy are not actually top hits. The majority of online sales can be made up of the sum of many small purchases across many niches. This is the long tail market. You can find thousands of practically undiscovered bands, books, and movies that many people would enjoy but that don't make a sizeable enough market to justify a spot in a Barnes & Noble store.

Online retailers can take advantage of this because they have (almost) unlimited storage space for millions of niche products. This is a huge source of profit that can even overshadow the market for big hits.

In the last two pages of his article, Chris Anderson describes three rules for harnessing the power of the long tail: 1) Make everything available. 2) Cut the price in half; now lower it. 3) Help me find it.

What stuck out in my mind was a quote from the second page: "Suddenly, popularity no longer has a monopoly on profitability." Now everyone can easily enjoy products that occupy only a minute niche in the market. This allows diversity to grow, as consumers are able to find a great variety that appeals to them specifically. Products no longer have to be mainstream to be recognized.

Wednesday, May 7, 2008

The Rules of Innovation

From Brett Peterson - -

In Clayton M. Christensen's article "The Rules of Innovation," he sets forth four rules a company must abide by when bringing innovative technology to market. Christensen shows that implementing innovation is not as hard as it may seem, and by following these rules, companies big and small can see immediate profits.

The first rule, "taking root in disruption," is about implementing disruptive products, not just sustaining current ones. These disruptive products often appeal to different markets than current ones do; for example, lower-cost and simpler designs appeal to the everyday consumer as opposed to big businesses.

The second rule, "picking the scope needed to succeed," relates to the degree to which the company produces its product itself (versus outsourcing). In some markets the product needs drastic improvements in functionality, while other markets need a product that is simpler and more convenient; factors such as these are key in determining the scope of production in a company.

The third rule, “leveraging the right capabilities”, simply refers to a company’s ability to produce a certain product. Christensen asks three questions of managers: 1) Do I have the resources (managers, money, and technology) to succeed? 2) Will my organization’s processes facilitate success in this new effort? Processes are normally habitual in a company and difficult to change. And 3) Will the values of my organization allow for employees to prioritize innovation?

The fourth rule of innovation is to disrupt competitors, not the customer. Implementing a new product to a market or creating a new market completely generally requires a seamless transition for customers from the old technology to a new one.

Christensen's four rules seem straightforward enough, but the hardest part of the process is accurately and without bias dissecting the company and the product to determine whether the venture will be profitable. It would be helpful to add a fifth rule about assembling an unbiased team to examine the product, company, and market, a team that doesn't have any specific interest in the success of the new product. Other than that, the four rules are exceptionally well explained (although at times the technical jargon is a bit much) and seem like a sure path to success.
--------------------
From Chantelle Venezuela - -

"Taking root in disruption," "the necessary scope to succeed," "leveraging the right capabilities," and "disrupting competitors, not customers" are the four sets of variables that Clayton M. Christensen established for companies developing technological innovations to abide by in order to see immediate profits.

The first variable, "taking root in disruption," covers how to carry out disruptive products effectively, not just how to nurture and strengthen current ones. Two tests determine whether a market can be disrupted, and at least one of the two must be satisfied in order to move forward. The first test is, "does the innovation enable less-skilled or less-wealthy customers to do for themselves things that only the wealthy or skilled intermediaries could previously do?" The second test is, "does the innovation target customers at the low end of a market who don't need all the functionality of current products? And does the business model enable the disruptive innovator to earn attractive returns at discount prices unattractive to the incumbents?"

The second variable, "picking the scope needed to succeed," covers how the profitability of a new business and its level of integration relate to one another. Whether a highly integrated or a nonintegrated company is more likely to succeed depends, as the article puts it, on "the conditions under which companies must compete as disruption occurs."

The third variable, "leveraging the right capabilities," asks three questions: (1) "Do I have the resources to succeed?" (2) "Will my organization's processes facilitate success in this new effort?" (3) "Will my organization's values allow employees to prioritize this innovation, given their other responsibilities?" There are also two misconceptions to take into consideration: the first, that innovators must avoid money and leave it to be dealt with by the corporations; the second, that patience is a virtue and that "innovators should be patient about the new venture's size but impatient for profits."

The fourth variable, "disrupting competitors, not customers," covers how innovation can help customers do things they are already doing in a simpler way, which in turn creates more profitability for a company. On the other hand, if an innovation tries to help customers do things they are not trying to do, the company will fail. Watching and observing customers is the best way for a company to understand what they want and need in its products.

Of Christensen's four variables, I found the fourth to be the most straightforward from a consumer's point of view, yet it may be difficult for companies to fulfill. For example, in the article, Christensen points to the textbook industry, where students say they would like "to probe more deeply into topics" but are really just trying to avoid reading a textbook at all. This is one case where companies might invest their money and "new" innovation to appeal more to their consumers, but get little result because of how little real interest students have in reading textbooks.

Tuesday, May 6, 2008

The World is Flat Article

Joy Faerber's Summary please comment
In his 2005 article entitled, “It’s a Flat World, After All,” Thomas Friedman shows us how technology today is literally changing our lives. Technological progression has reshaped the way every human on earth does business, communicates, works and even plays. We are operating in a global economy, and it even affects people that have never touched a computer or cell phone.

Globalization has been evolving since Columbus sailed in 1492; first through countries searching for resources and imperial conquest, then through companies looking globally for markets and labor. Since the year 2000, globalization has evolved by individuals being empowered through technology. We now see groups of very diverse individuals pooling their knowledge and skills, which will bring us amazing new innovations worldwide. What caused this change in global society? It was the result of ten events that all came together around the year 2000. We call these events “flatteners,” due to the fact that in a sense, they flattened the world. The ten events were:

1. Fall of the Berlin Wall – allowed us to think of the world as one whole world.
2. Netscape browser – brought the internet to individuals and triggered the dot-com boom. India benefited more from the connectivity that Netscape afforded than any other country did.
3. Workflow – software applications, standards and electronic transmission pathways.
4. Outsourcing – jobs were sent to other countries.
5. Off-shoring – entire companies built plants in other countries.
6. Open-sourcing – collaborating together on software development and usage for free.
7. Insourcing – hiring outside companies to handle certain core logistics for a business.
8. Supply-chaining – create a supply chain down to the last widget, from manufacturing to retailing.
9. Informing – allows anyone to collaborate with, and mine, unlimited data.
10. Steroids – a funky name given to wireless access and VoIP, i.e., devices such as PDAs and smartphones.

This “flattening” of the world is causing business strategies to be created more and more through collaboration within companies, between companies, and with individuals, rather than by bloated administration of big businesses. Thomas Friedman says, “It is this convergence of new players, on a new playing field, developing new processes for horizontal collaboration that I believe is the most important force shaping global economics and politics in the early 21st century.”

What stands out in this article for me is the point Friedman makes about America falling behind other countries, such as China and India among others, in technological innovations. America has been the leader in innovation over the last century, and perhaps that has made us too comfortable. People in countries like China have nothing to lose, so they are eager to leap right into new technologies, and work hard to create new innovations. We have to realize that if we don’t embrace new innovations, and prepare our children – the next generation of U.S. citizens – to create and produce new technologies and ideas, they will not enjoy the same benefits and living standards that we have had for so many years in this country. Friedman refers to this as a “quiet crisis that is slowly eating away at America’s scientific and engineering base.” Let’s not let that happen!

Blogging on the Articles

Main Posting:

For each reading you are required to post a 200-word summary and critique of your assigned article. This should include about 100 words summarizing the article and a 100-word critique of its thesis. You will do this once during the term.

The post will be under the class blog (WSUmis171.blogspot.com) under each article area.

Due Date: Sunday at 11:55pm before the week’s reading.

Grade: 50 points

Comment Postings:

You are required to post a comment about the reading for EVERY article. This should be a couple sentences that can be of one of the following forms:
1 – Interesting addition to the article from outside material
2 – Your thought about the article
3 – A comment on a Main posting from a student
4 – A comment on another comment

Due: Before class on the article discussion day.

Keep this professional!

Grade: 5 points per posting (50 total points).

Blog/Google Pages Assignment

Post the URLs for your blog AND your GooglePages Assignment as one comment here.

Web 2.0 Article

Today’s successful web use demonstrates a fundamental shift in web application development and business model. Web sites that are used, that are relevant, and that generate revenues are those sites which have adapted to the new design patterns and business models dubbed “Web 2.0”.

What strikes me most about the article is how the web has moved from being merely a new medium to being the actual software platform. Instead of all software and user activity being dominated by software release platforms such as Microsoft's Windows, the new business model views the web as the platform and strives to develop services on that platform to deliver user-relevant content. This new model has profound impacts on the business model, including the elimination of "software release cycles." Web 2.0 shows continuous improvement, even daily updates that are transparent to users.

Sites that provide the most relevant content to users are also the most successful. The key to providing that content is tracking users and giving them control. Amazon, Wikipedia, and others demonstrate how user modifications, user content, and user linking with new technologies, such as RSS feeds, drive the most successful sites.
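As a small aside on the RSS piece of that, here is a minimal sketch of what "user linking" looks like in practice: subscribing to a site's feed and pulling its latest entries. It assumes the third-party feedparser package, and the feed URL is only a placeholder.

# Minimal sketch of consuming an RSS feed, one of the "user linking"
# technologies mentioned above. Requires the third-party feedparser package
# (pip install feedparser); the feed URL below is only a placeholder.
import feedparser

feed = feedparser.parse("https://example.com/blog/rss.xml")

print(feed.feed.get("title", "untitled feed"))
for entry in feed.entries[:5]:
    # Each entry carries the headline and link a subscriber's reader would show.
    print("-", entry.get("title", "(no title)"), entry.get("link", ""))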

Another key power in Web 2.0 is chasing the "long tail" instead of going only after the "head." In other words, because there are so many small individual users and niches, those who cater to the many small fringe users will overshadow those who serve only the few large, obvious markets.

Since service and user content are the drivers in Web 2.0, it also follows that being a data provider, and being able to deliver that data well, are characteristics of Web 2.0 sites. Instead of attempting to provide a controlled software application, these successful sites provide information that is useful and available to meet users' needs. How better to meet that goal than to monitor and involve users themselves in selecting that data?

A side thought for me is how this move to Web 2.0 corresponds to the flat-world concepts presented in the previous module. Because users drive Web 2.0, and the flattening of the world levels the playing field for many cultures, future Web 2.0 development will clearly be heavily influenced by users from around the world and not only from Western cultures. Service providers who can best meet global user needs will be the most successful over the coming years.

Hello world

First Blog Entry