A good description of what MDD is...

posted Jun 20, 2010, 9:02 PM by Kuwon Kang

JET is typically used in the implementation of a "code generator". A code generator is an important component of Model Driven Development (MDD). The goal of MDD is to describe a software system using abstract models (such as EMF/ECORE models or UML models), and then refine and transform these models into code. Although it is possible to create abstract models and manually transform them into code, the real power of MDD comes from automating this process. Such transformations accelerate the MDD process and result in better code quality. The transformations can capture the "best practices" of experts, and can ensure that a project consistently employs these practices.

However, transformations are not always perfect. Best practices are often dependent on context - what is optimal in one context may be suboptimal in another. Transformations can address this issue by including some mechanism for end-user modification of the code generator. This is frequently done by using "templates" to create artifacts, and allowing users to substitute their own implementations of these templates if necessary. This is the role of JET.
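JET templates themselves use a JSP-like syntax and are a Java/Eclipse technology, but the underlying idea - static boilerplate plus escapes that pull values out of a model at generation time - can be sketched in a few lines using Ruby's standard ERB library. The two-field "model" below is invented purely for illustration; it stands in for what would really be an EMF/ECORE or UML model:

```ruby
require 'erb'

# A toy "model"; in real MDD this would come from an EMF/ECORE or UML model.
model = { class_name: 'Employee', fields: %w[name extension] }

# The template plays the role of a JET template: static text plus escapes
# that pull values out of the model at generation time.
template = ERB.new(<<~TPL)
  class <%= model[:class_name] %>
  <% model[:fields].each do |f| %>  attr_accessor :<%= f %>
  <% end %>end
TPL

generated = template.result(binding)
puts generated
```

A user who disagrees with the generated style only has to swap in a different template string - the model and the generation machinery stay untouched, which is exactly the substitution mechanism described above.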

Creating Open Web APIs: Exploring REST and WOA in Rails 2.0

posted Jun 20, 2010, 7:19 PM by Kuwon Kang

In my recent 2008 predictions for the future of Web services and open APIs for enterprise applications, I said that we'd finally see a large-scale movement to newer, lightweight Web-based models for opening up our software systems and integrating them together. In other words, heavyweight SOA has finally fallen out of favor and lightweight SOA -- sometimes known as Web-Oriented Architecture (WOA) -- is in.

However, this sea change has long since taken place on the Web and this year will see best practices in this area take another major step forward as we'll examine below. The recent convergence of the Web, SaaS, SOA, and other approaches has also made the boundaries between our architectures and systems increasingly intertwine and blur. As part of this evolution, we have also watched the gains that successful firms like Amazon and Facebook have made by opening up their products on the Web. And strategically, as an industry, we've begun to find it a lot smarter to think in terms of reusable, interconnected open platforms instead of single-play software applications. Along this journey, we have begun a major return to the roots of the globally linked structure of the Web.

The rest of this post consists of two sections: one conceptual and one technical.

The daily reinforcement and continuous growth in the fundamental power of HTTP and URL link structure, which is directly driving the Web's overall network effect, has started giving rise to a new generation of software architects and product designers. This generation has grown up deeply influenced by it, and they tend to think about the creation of software in novel, highly Web-oriented ways. Though the classical software industry has a long and proud heritage of its own around methodologies, architectural approaches, and design patterns -- proven in the crucible of real-world implementations of years past -- in this decade the Web has managed to exert its own unique, irresistible, and pervasive influence on virtually all aspects of producing software. For example, agile processes have been pushed to the limit and beyond by the forces imposed by the realities of the Perpetual Beta. And the scale of even average-sized applications on the Web is now the largest we've ever seen. The absolute necessity of cost-effective operations and the marketplace requirements of embracing the new business models for Web 2.0 applications -- including advertising, user-generated content, and rich user experiences -- have also changed the fundamental technical and commercial ground rules for success. As a whole, these changes have been driving a need for new software platforms that are explicitly designed to help us efficiently produce scalable, compelling online applications while also addressing the realities of modern-day Web apps.

Many of us who have to create the next generation of Web applications have been taking a hard look at the new platforms that have been created for the modern era of very large-scale networked software applications. And I'll be very clear here: While a great many of the old ideas and techniques in software development are as applicable today as they were ten years ago, there is also a whole new set of constraints and enablers for which we have to be very good at optimizing. As the Web begins its 2nd major wave of maturity -- and depending on who you listen to -- there is considerably less tolerance for older, inefficient methods for developing Web applications; vigorous online competition for marketshare and increasingly online-savvy businesses have a much better sense of what is possible, how much it should cost, and when it should be delivered.

These factors as a whole have pushed us into a new era of productivity-oriented platforms that started years ago with languages like Tcl and Perl and quickly moved on to Python, PHP, and Ruby. Ultimately we ended up where we are today, with advanced, highly efficient frameworks for these languages such as Ruby on Rails and CakePHP. These tools now let us create Web applications literally 10 to 20 times more efficiently than the general-purpose language platforms of the 20th century, and with both traditional software engineering and new Web 2.0 best practices already built in. These improvements have only spurred what can only be called a "radical" movement in the software business, one that started with open source software (the peer production of software) in the 90s and has arrived at a dramatic departure from the way we used to look at software languages and platforms -- in particular, at how vertical a software development platform could be before it lost general appeal.

These new efficiency gains and vertical focus, however, are almost exclusively aimed at the twin goals of developer productivity and good design. These are both admirable and important goals, since programmer time has always been one of the leading costs in producing software. Software applications also spend most of their lifetimes in maintenance mode, and clean application architectures from the outset can greatly facilitate updates and revisions. However, over the same time frame, the run-time efficiency of our programming environments, partially obscured by a little help from Moore's Law and Nielsen's Law, has been in a major decline. This has been largely intentional, when it comes to supporting improved developer productivity, or entirely unfortunate, such as the general failure of the software industry to figure out how to help software designers fully leverage the now ubiquitous generation of multi-core processors.

Out of all this there has grown a distinct and growing tension between the need to rapidly and inexpensively produce quality software and the requirement for it to scale cost-effectively to millions of users. The simple fact, which you can readily see in the Hard Metrics diagram to the right, is that the previous generation of programming languages and platforms is up to 40x faster than what many of us would prefer to use today to develop Web applications. Yet the more you go to the left on both diagrams, the more that programming platform becomes extremely expensive and time-consuming to develop with. Why is this? There are two primary reasons.

One is that the more popular, older programming languages tend to be relatively low-level and general-purpose and were designed for a different, older set of constraints. This has given us baggage that is often not very applicable to the modern Web-based world. Second, we've become very good at understanding the idioms and "syntactic sugar" that make developers more productive for Web development, and we've put that into the latest generation of programming languages and Web frameworks. Unfortunately, the combined newness of these Web development platforms and their preference for coding-time efficiency over run-time efficiency have conspired to make the results they produce relatively slow and resource-inefficient compared to what is potentially possible. Newness in this case is also a kind of performance tax, since we just haven't had enough time learning how to make these new platforms perform well at run-time, similar to early versions of Java before the advent of the Just-In-Time (JIT) compiler. Fortunately, efforts like Ruby .NET have made some notable headway in this space recently, but are not commonplace yet.

The intent of the rest of this article is to explore the new release of Ruby on Rails 2.0 and examine it in the context of the trends above. The ultimate 10 million dollar question in the Web development platform arena is: Are the developer productivity benefits, including the embodiment of many current Web application best practices, that are conferred by new generation Web development platforms like Rails worth their cost in terms of operational efficiency? Increasingly, whether you're a corporate IT executive or a programmer at an Internet startup, you're going to be facing this difficult decision when you choose your target platform. Questions like "is it worth 5x-10x the programmer time to get run-time efficient software?", or "should I just increase the investment in more processor cores and bandwidth in the data center?" will keep you up at night. Making the wrong choice has potentially serious long-term consequences in terms of what it will ultimately take to maintain and operate your application. A programming platform's implications for operations are particularly pronounced since Web apps require more operational resources on the server-side the larger they grow, unlike traditional, installed stand-alone applications.

One way of thinking about the problem is that it's almost never a good idea to bet against significant improvements in computing and network bandwidth. So far we've not yet seen much to indicate that large, regular improvements won't continue for the foreseeable future. Another is assuming that a platform should be used in a slavishly monolithic fashion for an entire application. In fact, as an insightful interview with Alex Payne, a lead developer of one of the most well-known Rails success stories, Twitter, shows, it often makes sense to move the slowest parts of the app into something faster. This is such a common situation in software development that it's long been codified as the Alternate Hard/Soft Layers pattern. And while these two considerations alone will go a long way towards helping one decide which direction to take, one must also look to where the industry is going as a whole. The new productivity-oriented platforms are here to stay, and adopting strategies to use their strengths effectively while being proactive in addressing their weaknesses is the best route to success with 21st century Web applications.

Where's the interface? REST doesn't have a contract description language and essentially uses duck typing. Read about best practices for WOA/Client development.

So all of these issues form the lens through which we must look at the modern Web 2.0 applications arena. But let's take an actual look at what we're talking about here. How efficient can these new development platforms really be? And do they actually encourage us down the right paths in terms of modern best practices in the Web 2.0 era? Let's validate this by actually building, hands-on, an entire Web application using one of these new productivity-oriented programming platforms, specifically the newly released Ruby on Rails 2.0. Those following along will need a little bit of technical skill, but you'll see that these new platforms are tremendously efficient from a developer perspective. In fact, we'll have an application up and running literally a few minutes after you get your Rails 2.0 environment installed.

Building a WOA-compliant Web Application in Rails 2.0

We're about to get our hands on Rails 2.0 and build a complete data-driven Web application. But first we have to understand a little bit about REST and WOA, since that's the "return to the roots of the Web" story I alluded to in the beginning. Nick Gall originally coined the term WOA, which he defines for us here. It's also called a resource-oriented architecture, but at the core of both conceptions is an approach called REST, which I've previously defined with specifics for those of you not familiar with it. The key idea is that REST is just a way of using the fundamental protocol of the Web, the Hypertext Transfer Protocol (HTTP), to exchange information with anyone else on the Web. REST treats the information on the Web as URL-addressable resources, which includes traditional Web pages but also pure data including XML, video, and audio. REST, which is really just a style of using HTTP, leverages the architecture of the World Wide Web in a natural, organic manner. In other words, REST is the best way we currently know of to open up our Web applications to the rest of the world, an approach I have called the Global SOA in the past.

In contrast to object-oriented models for software, or the procedural models used by traditional Web services such as SOAP, REST only uses four methods, those built into HTTP itself: GET, POST, PUT, and DELETE, which themselves operate on data resources located at URI endpoints located on Web servers holding the data (typically a relational database under the covers). Consequently, REST applications tend to have a much larger (and transparent) set of surface area dependencies directly on sets of addressable Web data instead of on bundles of procedural methods through which XML schema instances are passed.

Since platforms like Rails embody many of our latest ideas about how best to develop for the Web, it should come as little surprise that the principal creator of Rails, David Heinemeier Hansson, recently observed that the latest release of Rails consists mainly of "a slew of improvements to the RESTful lifestyle." One of the most remarkable things about Rails is how it pays more than lip service to this essential resource-oriented view of the Web. As you shall see, since open APIs are one of the hot topics in the Web applications business at the moment, it's nice to know that every Rails app automatically gets its very own RESTful API. So, let's see this for ourselves...

Step 1: Getting on Rails 2.0

To explore developing a Rails 2.0 app and creating/using its open Web API, you'll need to install four pieces of software on your test computer.

  1. Install the Ruby programming language. Version 1.8.6 is highly recommended. Here are links to the Ruby Windows installer and Mac OS 10.4 instructions. Ruby comes installed by default on 10.5 (Leopard). 
  2. Use the Ruby Gems updater from a command-line or terminal session to pull in Rails 2.0 from over the Internet:

    gem install rails -y

    You'll know you did this right if the output of the command:

    rails --version

    is Rails 2.0.2 or higher. Warning: If you have an earlier version of Rails installed, it will be upgraded automatically. 

  3. Install a database of your choice. MySQL or Postgres are recommended, but even SQL Server or Oracle will work just fine, though you will probably have to install their gems separately. Rails is for designing database-driven Web apps, so make sure you write down your user account and password for the database. Make sure you've created a named database instance. Don't worry about application tables for now; we'll have Rails take care of that for us later. 
  4. Connect the database to your Rails application. First we create a skeleton app and then we'll tell it about our database instance. Note: Getting the database connection information and credentials right after you install everything is often the hardest part about getting Rails up and running.

    First, let's create the skeleton application we're going to be using for the rest of our work. Go to a local directory of your choice and type the following from a command line:

    rails railsapp

    This will lay down the entire application structure for a Rails app, including a built-in Web server, WEBrick. Inside railsapp there will be a config directory with a file called database.yml. Open it in your favorite editor, fill out the development: section of the file with your database credentials, and save it.

    Start WEBrick in a separate command-line instance by typing the following, and be prepared to stop and start it occasionally as certain application changes are made:

    cd railsapp
    ruby script/server


  5. Download cURL, a command-line HTTP invoker. Put it in a path you can reach from your command-line instance. We'll be using cURL to simulate a RESTful Web API client and invoke our Web application's REST API on a data resource we've previously created, as a user, via the HTML interface we've built in Rails.
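As a point of reference for step 4, the development: section of config/database.yml might look roughly like the following for a MySQL setup. The database name, username, password, and host shown here are placeholders for your own values:

```yaml
development:
  adapter: mysql
  database: railsapp_development
  username: root
  password: yourpassword
  host: localhost
```

If your named database instance is called something other than railsapp_development, use that name instead; the adapter line changes accordingly for Postgres or other databases.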

Now we're ready to start development and testing of our Rails 2.0 application. Keep WEBrick running in one command line window so you can see its debug output (it will show you all the HTTP requests that go back and forth), and have another command line in the railsapp directory ready to invoke various Rails commands.

Creating Our Open Web Application in Rails 2.0

For the purposes of this demonstration, we're going to build a very simple employee tracking application in Rails. We're going to use the newer syntax in Rails for easy creation of a full Web application with employee record creation, viewing, updating, and deletion. Rails will even create the database tables for us, all the user interface screens (albeit unstyled), unit tests, and even a complete RESTful API, our final end goal.

Astonishingly, we're going to create all of this using only two short commands at the command-line. You'll see why Rails is one of the most productive Web development platforms available and this step in particular shows some of the radical ease of use that Rails proponents (myself included) are consistently impressed with.

Step 1: Create the basic employee tracking application

Rails uses a well-designed Model-View-Controller architecture, and in our first command we're going to ask it to create all three items for us for our employee tracking database, as well as matching unit tests and cross-platform database scripts. To keep it simple, we're only going to track two fields for each employee: their name and their extension. You are welcome to add additional fields, but you will then have to deal with them in the steps below.

In the railsapp directory, type the following command:

ruby script/generate scaffold emp name:string extension:integer

You will see a lot of output showing the files that the Rails framework creates to handle the employee data we've just specified. The command itself invoked the generate facility of Rails to create a new scaffold for a model called 'emp', which has two fields: an employee name typed as a text string and an extension typed as an integer. A scaffold is an initial, working application skeleton with basic functionality, including database persistence and a matching set of HTML forms for CRUD operations.

Believe it or not, the employee tracking application is now mostly finished; the only thing we need to do is update the database so that it has the schema for the employee records. You can use Rails' rake facility to get this done. This will require that you have correctly set up your database.yml file, and you will have to debug any connection issues to get this step to work. To migrate our employee model, named 'emp', to the database, type the following in the railsapp directory:

rake db:migrate

Now you can run the application from the WEBrick instance. Note that emp has been pluralized automatically by Rails, so our application is located at the emps endpoint. To access our new Rails 2.0 Web app, point your browser to:

http://localhost:3000/emps
You should see the listing screen of the employee tracking database.

Click on New emp to create a new employee, enter a name of Roy Fielding with an extension of 1234 (aside: Roy Fielding is co-inventor of HTTP and the person who created the original vision around REST), and click on Create.

The data entered is then transmitted from the browser to the server and stored in the back-end database. You can then view it, destroy it, or add more employees with the user interface that was generated for us by Rails.

We've now completed a simple but fully functional Rails application from beginning to end. But what we've come here to see is the fully RESTful open Web API that was created for us along the way. For this we'll need to use cURL to issue the API calls via HTTP to simulate another online program integrating live with our Web application.

Step 2a: Invoke the REST API to GET the employee resources

Now we're going to exercise all the HTTP verbs on our open Web API to see how it works. The diagram below shows the overall lifecycle of a REST-based resource using our emp example. The good news is that Rails automatically offers URL-addressable resources for all the data in a Rails Web application. This access can be controlled and channeled as needed, but it's open by default for whichever views already have visual access via HTML forms. This means Rails developers get a RESTful API for their applications at the same time as they develop their user interface.

Let's go ahead and use the REST API to pull the data for the employee that we added above. We'll use the handy HTTP utility cURL to interact with the Rails application via HTTP. Note that the URL we'll use now has the '.xml' extension added to it. This tells Rails that we're trying to access the XML representation of the resource instead of using the HTML user interface (in other words, we're playing the role of a program instead of a human user).

curl http://localhost:3000/emps.xml

Enter the text above in a local command line or shell with the cURL binary in the execution path. You should see output similar to the following. It's an XML representation of the employee data in list format, pulled fresh from the server via the REST API, with the emps tag as the enclosing list structure holding individual emp instances.

<?xml version="1.0" encoding="UTF-8"?>
<emps type="array">
  <emp>
    <created-at type="datetime">2008-01-11T01:02:53+01:00</created-at>
    <extension type="integer">1234</extension>
    <id type="integer">1</id>
    <name>Roy Fielding</name>
    <updated-at type="datetime">2008-01-11T01:02:53+01:00</updated-at>
  </emp>
</emps>

It's idiomatic in Rails to use the id attribute as the primary key for application data. In fact, this convention is required for a lot of the magic in Rails to happen automatically, and the rake migration way back in Step 1 already took care of adding this column to the database for us. That means we can use the id as the final component of our employee resource URIs for updating, getting, and deleting individual employee resources.
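A program consuming this API wouldn't read the XML by eye, of course. As a sketch of the client side, here is how the id and name of each employee could be extracted using Ruby's bundled REXML parser; the response body from the GET above is pasted in as a literal string, where a real client would read the HTTP response instead:

```ruby
require 'rexml/document'

# The XML returned by GET /emps.xml, captured here as a literal string;
# a real client would use the HTTP response body instead.
body = <<~XML
  <?xml version="1.0" encoding="UTF-8"?>
  <emps type="array">
    <emp>
      <id type="integer">1</id>
      <name>Roy Fielding</name>
      <extension type="integer">1234</extension>
    </emp>
  </emps>
XML

doc = REXML::Document.new(body)

# Walk each emp element in the list and pull out the fields we need.
doc.elements.each('emps/emp') do |emp|
  id   = emp.elements['id'].text.to_i
  name = emp.elements['name'].text
  puts "#{id}: #{name}"
end
```

The id pulled out this way is exactly the value we'll append to the resource URI in the steps that follow.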

Step 2b: Update an employee resource through the REST API via PUT

Let's go ahead and update Roy Fielding's phone extension through the REST API. Since we can tell from the employee list above that Roy's id is '1', we can use that to let the API know which record we'd like to update. We only have to send two parts of the resource in the API call: the id and the attributes we'd like to update.

Create a file called put.xml with the following contents:

<?xml version="1.0" encoding="UTF-8"?>
<emp>
  <extension type="integer">5678</extension>
  <id type="integer">1</id>
  <name>roy fielding</name>
</emp>

Invoke cURL with the following parameters to actually update the phone extension in the resource on the server (and consequently in the database.) The -H parameter sets the header so that Rails knows that an XML representation of the resource is being sent to it. -T makes the HTTP invocation a PUT operation, and the URL of the resource is http://localhost:3000/emps/1.xml where the number 1 corresponds to the id of the resource:

curl -v -H "Content-Type: application/xml; charset=utf-8" -T put.xml http://localhost:3000/emps/1.xml
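The same request can be issued from program code rather than cURL. Here is a sketch using Ruby's standard Net::HTTP library; since it assumes the WEBrick server from earlier is running at localhost:3000, the line that would actually send the request is left commented out and we only construct it:

```ruby
require 'net/http'

# Build the same PUT request that the cURL command above issues.
uri = URI.parse('http://localhost:3000/emps/1.xml')

request = Net::HTTP::Put.new(uri.path)
request['Content-Type'] = 'application/xml; charset=utf-8'
request.body = <<~XML
  <?xml version="1.0" encoding="UTF-8"?>
  <emp>
    <extension type="integer">5678</extension>
    <id type="integer">1</id>
  </emp>
XML

# With the Rails server running, this line would send the update:
# response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }

puts request.method
```

Note how the verb lives in the request class (Net::HTTP::Put) and the resource identity lives entirely in the URL, which is the essence of the RESTful style discussed above.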

Step 2c: Add an employee resource through the REST API via POST

Now we'll add a new employee to our application over the network using the REST API. This employee will be Tim Berners-Lee, so we'll create another XML file called post.xml that looks like the following:

<?xml version="1.0" encoding="UTF-8"?>
<emp>
  <extension type="integer">1212</extension>
  <name>Tim Berners-Lee</name>
</emp>

To send this via a POST operation through the REST API using cURL, issue the following on the command line. The --data-ascii parameter identifies the file to send via HTTP to our REST API. Because the resource does not yet exist, the URL is the base of the resource type, http://localhost:3000/emps. Rails conveniently returns the XML representation of the added resource, so the id generated on the server for the newly added record can be obtained in the client without a second call to the server. Add Tim Berners-Lee to our employee tracking application via the API:

curl -v -H "Content-Type: application/xml; charset=utf-8" --data-ascii @post.xml http://localhost:3000/emps.xml

Browsing the employee list via cURL or the employee tracking app's Web forms will show that Tim Berners-Lee has now been added to the application, including the database, via the REST interface.

Step 2d: Delete an employee resource through the REST API via DELETE

Now we'll go ahead and remove Roy Fielding from the database using our REST API. This process is straightforward and uses the HTTP verb DELETE. You can issue this via cURL using the following command:

curl --request DELETE http://localhost:3000/emps/1.xml

You can now verify through the employee tracking Web forms that Roy Fielding's employee record has been permanently removed from the database.


We've seen how Rails 2.0 makes it enormously simple to create a database-driven Web application, expose it via a REST API, and manipulate it via a REST-capable client in a clean, no-nonsense manner. Developing similar capability in C++, Java, or .NET environments is currently much more difficult. What you see above, however, is only the beginning; Rails 2.0 has added a lot of other support for more sophisticated uses of REST and HTTP. I'll cover these in one of my upcoming posts as soon as I am able. The key point here is that the next generation of Web application platforms puts almost staggering amounts of power in the hands of the average Web developer while providing powerful capabilities like properly formed REST APIs automatically. This further puts the latest best practices for Web apps into place where they otherwise wouldn't appear. Open APIs will help power the next generation of online success stories, and for this and other reasons Rails should be on the short list for those considering new Web development efforts -- that is, if they are prepared to do what's necessary to address Ruby's and Rails' shortcomings in run-time performance.

Still trying to understand exactly why Rails is such a compelling option? Read an analysis of why platforms like Rails are a major improvement over previous generations of Web application platforms.

If you have any trouble getting the code to work, please contact me at

DAO vs Repository

posted Jun 20, 2010, 7:18 PM by Kuwon Kang   [ updated Jun 20, 2010, 7:19 PM ]

DAO is one of the patterns we hear about most often.

If you want to know the details, read Enternity's blog post below.

What interests me, though, is the Repository. Since I haven't yet tried DDD (Domain-Driven Design), I can't say for certain which technical approach is better compared with the DAO, but looking at it from an overall perspective, I suspect the Repository may be the better choice.
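To make the contrast concrete, here is a rough, hypothetical sketch in Ruby (the class and method names are invented for illustration, and the in-memory hashes stand in for a real database): a DAO speaks the vocabulary of the database, while a Repository, as used in DDD, reads like an in-memory collection of domain objects.

```ruby
# DAO style: the interface mirrors database operations (insert/select/delete).
class EmployeeDao
  def initialize
    @rows = {}
  end

  def insert(id, attrs)
    @rows[id] = attrs
  end

  def select_by_id(id)
    @rows[id]
  end

  def delete(id)
    @rows.delete(id)
  end
end

# Repository style: the interface reads like a collection of domain
# objects, hiding persistence vocabulary entirely.
Employee = Struct.new(:id, :name)

class EmployeeRepository
  def initialize
    @employees = {}
  end

  def add(employee)
    @employees[employee.id] = employee
  end

  def find(id)
    @employees[id]
  end

  def remove(employee)
    @employees.delete(employee.id)
  end
end

repo = EmployeeRepository.new
repo.add(Employee.new(1, 'Roy Fielding'))
puts repo.find(1).name
```

The functional difference is small here; the conceptual one is that the Repository keeps the domain model in charge, which is why it tends to fit DDD better.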

Architecture Documentation, Views and Beyond (V&B) Approach

posted Jun 20, 2010, 7:11 PM by Kuwon Kang

An article and related materials on architecture documentation from Carnegie Mellon.

It explains how to create useful architecture documentation and provides the underlying concepts and templates. Below is a session on Architecture Documentation from SDN, originally presented at JavaOne. 

10 Must-Know Topics For Software Architects In 2009

posted Jun 20, 2010, 7:11 PM by Kuwon Kang

Lightweight Processes, Service-Orientation, Enterprise Architecture, and Software Development

Software Architecture in 2009

In the last year or so, after quite a lull, the software architecture business has gotten rather exciting again. We're finally seeing major new topics emerging into the early mainstream that are potential game-changers, while at the same time a few innovations that have been hovering in the margins of the industry are starting to break out in a big way.

The big changes: The hegemony of traditional 3- and 4-tier application models, heavyweight run-time platforms, and classical service-oriented architecture that has dominated for about a decade is now being torn asunder by a raft of new approaches to designing and architecting applications.

These might sound like incautious words, but major changes are in the air and architects are reaching out for new solutions as they encounter novel challenges in the field. As a consequence, these new advances either address increasingly well-understood shortcomings of existing approaches or add new capabilities that we haven't generally focused on before but that are becoming increasingly important. A few examples of the latter include creating reusable platforms out of applications from the outset (the open API story) or cost-effectively creating architectures that can instantly support global distribution, hundreds of terabytes of data, and tens of millions of users. There are others that we'll explore throughout this post.

These innovations are hallmarks particularly of the largest systems being built today (which are running into unique challenges due to scale, performance, or feature set) though these software advances are also moving across the spectrum of software from everyday corporate systems and Internet applications to new mobile devices and beyond, such as the emerging space of social networking applications.

Mainstays of application architecture such as the relational database model, monolithic run-times, and even deterministic behavior are being challenged by non-relational systems, cloud computing, and new pull-based systems where consistency and even data integrity sometimes take a backseat to uptime and performance.

Let's also not forget about Web 2.0 approaches and design patterns, which are becoming ever more established in online applications and enterprise architecture both. Social architectures, crowdsourcing, and open supply chains are becoming the norm in the latest software systems faster than expected in many cases. Unfortunately, as a result, the architectural expertise needed to effectively leverage these ideas is often far from abundant. 

To try to get a handle on what's happening and to explore these emerging topics, I've been doing conference talks lately about the transformation of software architecture that we're beginning to see in so many quarters these days and generally finding consensus that the exciting days of architecture are back, if they ever left. Now it's up to us to begin the lengthy process of taking many of these ideas into our organizations and integrating them into our thought processes and architectural frameworks and bringing them to bear to solve problems and provide value. As one software architect came up and asked me recently, "How do I get my organization to understand what's happening out there?" This is an attempt at addressing that question.

Here's a list of the most important new areas that software architects should be conversant in and looking at in 2009:

10 Must-Know Topics for Software Architects in 2009

  1. Cloud Computing. This one is easy to cite given the amount of attention we're seeing in the blogosphere and at conferences, never mind the (considerable) number of actual users of popular cloud services such as Amazon EC2. While the term doesn't have an exact definition, it covers the gamut from utility hosting to Platform-as-a-Service (PaaS). I've covered cloud computing on ZDNet in detail before and broken down the vendor space recently as well. While the economics of cloud computing can be extremely compelling and there is undoubtedly a model that will fit your particular needs, cloud computing is also ground zero for the next generation of the famous OS platform wars. Walk carefully and prototype often to get early competency in an architectural advance that will almost certainly change a great deal about the software business in the near future.
  2. Non-relational databases. Tony Bain over at Read/Write Web recently asked "Is The Relational Database Doomed?" While it's far too soon to declare the demise of the workhorse relational database that's the bedrock of so many application stacks, there are a large number of promising alternatives emerging. Why get rid of the traditional relational database? Certain application designs can greatly benefit from the advantages of document or resource-centric storage approaches. Performance in particular can be much higher with non-relational databases; there are often surprisingly low ceilings to the scale of relational databases, even with clustering and grid computing. And then there is abstraction impedance, which not only creates a lot more overhead when programming but also hurts run-time performance by maintaining several different representations of the data at one time during a service request. Promising non-relational solutions include CouchDB, which I'm starting to see in more and more products, as well as Amazon SimpleDB, Drizzle (from the MySQL folks), MongoDB, and Scalaris. While many applications will continue to get along just fine with relational databases and object-relational mapping, this is the first time that mainstream database alternatives are readily available for those that are increasingly in need of them.
  3. Next-generation distributed computing. An excellent story today in the New York Times about Hadoop provides a good backdrop on this subject: new distributed computing models are moving out of the lab and becoming indispensable for harnessing otherwise difficult-to-tap computing power against previously unthinkable quantities of data. While the traditional request-response models that are the mainstay of network-oriented computing remain important, so increasingly are effective ways to process the huge amounts of data that are now common in modern software systems. Watch this video interview with Mark Risher and Jay Pujara at Yahoo, which discusses how Hadoop "enables them to slice through billions of messages to isolate patterns and identify spammers. They can now create new queries and get results within minutes, for problems that took hours or were considered impossible with their previous approach." While Hadoop has considerable momentum, similar offerings include the commercial GridGain and the open source Disco, among many others.
  4. Web-Oriented Architecture (WOA). I've discussed Web-Oriented Architecture on this blog for several years now, and my most complete write-up is here. In short, the premise is that RESTful architectures (and the stack above and around them, including data representation, security, integration, composition, and distribution) are a more natural, productive, and effective way to build increasingly open and federated network-based applications. The WOA debate has raged since it became a hot topic last year, but the largest network in the world has cast its vote, and WOA is by and large the way the Web is going; WOA-based applications simply align better to the way the network itself inherently works. In my opinion, it is a much better way to create service-oriented architecture for almost all requirements, resulting in more supple and resilient software that is less difficult and expensive to build and maintain. For enterprises considering the move to WOA, here is a good overview I did a short while back about the issues and the evolution of SOA.
  5. Mashups. David Linthicum wondered today in InfoWorld where the mashups have gone, clarifying that he believed they had become integral to SOA and to delivering value in enterprise architecture. In reality, while mashups are extremely common in the consumer space, to the point that they're just an everyday application development activity, the tools and concepts are only now ready for prime time in business. I've previously called mashups one of the next major new application development models, and that's just what's happened. Mashups were also prominent in my Enterprise Web 2.0 Predictions for 2009 (item #7). If you're not studying mashup techniques, Michael Ogrinz's Mashup Patterns is an excellent place to start learning how they impact software architecture.
  6. Open Supply Chains via APIs. I find the term open APIs, which an increasing body of evidence shows are an extremely powerful model for cross-organization SOAs, to be confusing to the layperson, so I've begun calling them "open supply chains." Opening up your business in a scalable, cost-effective manner as a platform for partners to build upon is one of the most powerful business models of the 21st century. However, there seems to be a large divide between native-Web DNA companies and traditional organizations in understanding how important this is (it's increasingly mandatory in order to compete online). All evidence so far points to this as one of the most important, though potentially difficult, things to get right in your architecture. Security, governance, scalability, and ease-of-consumption are all major subject areas, and our enterprise architectures and SOAs must be ready for this business strategy as more and more organizations open up. Here's my recent "state of the union" on open APIs.
  7. Dynamic Languages. Though dynamic languages have been popular on the Web since JavaScript and Perl first arrived on the scene, it's only recently become acceptable to develop "real" software with them. .NET and Java are still extremely compelling (and common) platforms for writing and running application code, but it's dynamic languages like Ruby, Python, PHP, and now Erlang that are getting all the attention these days. Why is this? As I explored in a detailed comparison a while back, a trade-off in run-time performance has generally been found to enable a large boost in productivity by virtue of what it lets dynamic languages accomplish. It also doesn't hurt that a lot of work has gone into making newer dynamic languages extremely Web-friendly, which is now one of the most common use cases for any programming language. Dynamic languages have architectural trade-offs, of course, like any technology, though the frameworks increasingly built on top of them, such as Rails, CakePHP, and Grails, bring the latest best practices and design patterns, something that is not happening as frequently with older platforms. The tipping point has arrived, however, and dynamic languages are beginning to take center stage in a significant percentage of new projects. Software architects should be prepared.
  8. Social computing. Developers and software architects are often uncomfortable with the social computing aspects of software systems today, but Reed's Law argues that the value of social systems is generally much higher than that of non-social systems. Or you could just look at the many popular applications out there that are driven by their social behavior and derive their (often enormous) value from the participation it entails. Whether it's YouTube, Facebook, Twitter, or thousands of other social applications (business and consumer both), the lesson is clear: social architecture is an important new layer in the application stack, and I've since made it two entire quadrants of my view of Web 2.0 in the enterprise as a consequence. A List Apart has a great introduction to The Elements of Social Architecture, and I've identified some of the core patterns for this in my Enterprise 2.0 mnemonic, FLATNESSES. Finding a high-value place for social computing in our enterprise architectures will be essential for modern software efforts.
  9. Crowdsourcing and peer production architectures. Increasingly, the public network (the Web) has been used to enable potent open business models that are beginning to change the way we run our businesses and institutions. This started with open source software, has since moved to media, and is now encroaching on a wide variety of industries. Doing this online requires software architectures that can support it, including architectural models for harnessing collective intelligence, moderating it, aggregating it, and protecting it and the users who provide it. As I wrote a couple of months ago in 50 Essential Strategies for Creating a Successful Web 2.0 Product, these architectures of participation create most of the value in the software systems that employ them. If you're not sure this is a software architecture issue, just look at Amazon's Mechanical Turk or CrowdSound, the latter of which is a widget that allows even end-users to dynamically include crowdsourcing in their applications. You can also read John Tropea's new exploration of this topic for an application-layer viewpoint.
  10. New Application Models. The Semantic Web seems to be on the rise again, and I've already covered Platform-as-a-Service and mashups here, but in addition to these we are seeing entirely new application models cropping up at scale online. Whether these are Facebook applications, next-generation mobile apps (iPhone, Android, RIM, etc.), OpenSocial applications, or just the increasing prevalence of widgets and gadgets, the trend toward the atomization of software (which Unix arguably did best and most effectively so far) is reminding us that we still have new discoveries ahead of us. While these often seem trivial, applications-as-a-feature, it's also increasingly clear that they are here to stay and can provide considerable point value when designed correctly. Certainly for next-generation intranets and portals, as well as the online "desktop", micro-applications, which have to contend both with scale and with being useful and secure while embedded in other applications, are increasingly on the radar. Know how they work, why they are so popular (there are tens of thousands of Facebook and OpenSocial applications alone), and learn how they can be used to provide real utility and everyday value.
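The distributed computing model popularized by Hadoop (item 3 above) is easier to grasp with a toy example. The following is a minimal, single-machine sketch of the map/reduce word-count idiom; it illustrates only the programming model, not Hadoop's actual Java API, and all function names here are hypothetical.

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: emit a (word, 1) pair for every word in every document.
    In a real cluster, these pairs would be produced in parallel across nodes."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    """Reduce step: sum the counts for each distinct word.
    In a real cluster, pairs are shuffled so each reducer sees one key's values."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

docs = ["spam spam ham", "ham eggs", "spam eggs eggs"]
word_counts = reduce_phase(map_phase(docs))
# word_counts == {"spam": 3, "ham": 2, "eggs": 3}
```

The appeal of the model is that both phases are embarrassingly parallel, which is what lets Hadoop "slice through billions of messages" by adding machines rather than rewriting the algorithm.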

Any list of what is new and important in software architecture is necessarily a personal perspective, so I invite you to add your own items below in the comments.

Enterprise cloud computing gathers steam

posted Jun 20, 2010, 7:10 PM by Kuwon Kang

The days when organizations carefully cultivated vast data centers consisting of an endless sea of hardware and software are not over, at least not yet. However, the groundwork for their eventual transformation and downsizing is rapidly being laid in the form of something increasingly known as “cloud computing.” This network-based model for computing promises to move many traditional IT capabilities out to 3rd-party services on the network.

The promise of cloud computing has captured the industry’s imagination this year for two big reasons. The first is the growing realization that cloud computing can successfully be used to strategically cut costs and drive innovation. And the second is that current offerings are getting very close to being ready for prime-time use in enterprise environments.

When Web behemoth Google officially entered the cloud computing arena back in April of this year, the space became a hot topic in IT circles almost overnight, despite the long history of availability from major vendors such as Amazon and Sun as well as a number of pioneering smaller vendors such as 3Tera and Egenera.

Other major IT players, including IBM, Dell, HP, Intel, and Yahoo, are all making serious investments in cloud computing research or major infrastructure, Om Malik reported this week. ZDNet’s Mary Jo Foley is also tracking Microsoft’s movement in this space with project ‘Midori’.

Why was Google’s entry a signature moment in cloud computing? Most likely because it brought the necessary critical mass to an industry which was growing steadily but had yet to break out into the mainstream. Google has a well-known reputation for globally scalable applications that can reliably service millions of concurrent users while successfully controlling costs and efficiency in everything from power and bandwidth to storage and processing power. So when they claimed that anyone can now “build scalable web apps on top of Google’s infrastructure” it received considerable attention.

Cloudy IT: Increase efficiency while innovating

The twin challenges of driving the high costs of information technology down while providing innovative new solutions to improve the business are two forces that often come into direct opposition in the modern IT shop. Businesses must keep costs down to stay competitive while at the same time investing in new ideas that will offer compelling new products and services to those same customers.

Cloudsourcing: Using cloud computing to outsource IT resources, capabilities, and operations

These two objectives come into opposition since new spending (on things like R&D) is usually required to successfully innovate while at the same time the pressure is on to provide the same services for less than it cost last year. Companies have come to expect to reap the cost dividend from trends such as Moore’s Law, outsourcing, and year-over-year productivity improvements.

Interestingly, it’s at this very intersection of issues that cloud computing appears especially compelling. By offering easy access to more efficient IT capabilities across computing, storage, and applications while providing direct and immediate access to both external innovation and innovation capability, cloud computing offers an on-demand, scalable, and repeatable resource that can be used to solve two of the major challenges facing IT departments today. We’ll see in a moment how cloud computing can help with these issues in ways that traditional on-premises computing is hard pressed to match.

Aspects of the cloud

Cloud computing significantly changes many aspects of enterprise computing acquisition, operations, and governance, usually though not always for the better. These aspects are:

  • Reduced capital expenditures - Upfront costs are dramatically reduced since the onus of initial computing infrastructure investments rests primarily on the cloud computing provider. Ongoing costs are also lower due to economies of scale and multi-tenancy, which allows access to the lower cost of cloud computing resources even for very infrequent tasks.
  • Low barrier to entry - Because hardware and software do not have to be acquired, installed, and provisioned for every need, and resources can be tapped on-demand, often in real-time, getting started with cloud computing can be as easy as moving existing applications into a hosted data center, although this depends entirely on the architectural model of the cloud computing provider.
  • Multitenancy - Multiple customers share many of the same resources in the cloud computing model. This sharing both distributes cost and enables economies of scale in terms of centralization of resources including real estate, bandwidth, and power. Multitenancy is one of the key enablers of efficiency while at the same time posing certain security issues.
  • Security - In theory, cloud computing can be more secure than do-it-yourself computing since shared costs allow larger overall investment in security processes and infrastructure. However, there remain worries about access and control over an organization’s sensitive data, though to-date the security record of cloud computing has been quite good.
  • Scalability and performance - Cloud computing can provide access to very high levels of scale without the enormous costs of traditional infrastructure. Resources don’t have to be kept on hand for peaks, only to sit dormant much of the time with their costs stranded during valleys. Performance of cloud computing can also be very good since many providers have data centers around the world to keep the processing reasonably close to those accessing it over the network. However, the distances between the business and the services in the cloud are usually greater than from a local data center. The resulting latency can frequently be a bit higher than with local resources, though it is often quite acceptable for many applications.
  • Centralization vs. federation - Cloud computing can be centralized, as with Amazon and Salesforce, or it can be highly distributed, using peer-to-peer capabilities such as those provided by BitTorrent or Arjuna. Both methods provide access to economies of scale, but as Tim O’Reilly observed this week, building on federated computing resources often makes more sense than building on a centralized model, despite the former’s rather nascent state at the moment.
  • Service-oriented - Cloud computing is a service delivered over the network, but true service-orientation allows such services to be componentized, pluggable, composable, and loosely coupled. OpenID is a good example of such a service that has a well-defined interface and for which there are many providers in the cloud which are essentially interchangeable. Cloud computing has become increasingly service-oriented, with Amazon probably being the farthest along in maturity and breadth of services. In the end, cloud computing is making the Web truly become a Global SOA.
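The "service-oriented" point above can be shown in miniature. Below is a minimal Python sketch of what "pluggable and interchangeable" means in practice: the application depends only on an abstract contract, so one provider in the cloud can be swapped for another without code changes. The provider classes and token format are hypothetical illustrations of loose coupling, not the actual OpenID protocol.

```python
from abc import ABC, abstractmethod

class IdentityProvider(ABC):
    """Loosely coupled service contract: callers depend only on this
    interface, so concrete providers are interchangeable."""
    @abstractmethod
    def verify(self, token: str) -> bool: ...

class ProviderA(IdentityProvider):
    # Hypothetical provider: accepts tokens it issued (prefixed "a:").
    def verify(self, token):
        return token.startswith("a:")

class ProviderB(IdentityProvider):
    # A rival provider with its own token scheme (prefixed "b:").
    def verify(self, token):
        return token.startswith("b:")

def login(provider: IdentityProvider, token: str) -> bool:
    # The application never names a concrete provider; switching
    # providers is a configuration change, not a code change.
    return provider.verify(token)
```

The design choice this illustrates is that service-orientation pushes provider selection out of the application logic, which is what makes "many providers in the cloud which are essentially interchangeable" possible.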

It should be noted that the Web itself is the largest cloud computing resource in existence. Its millions of highly distributed computing nodes have been the most successful model overall in terms of providing value to 3rd party users, and not the walled-garden network providers of yore such as AOL, Prodigy, or MSN. This is likely a crucial hint as to where the ultimate future of cloud computing will lie, with an increasing emphasis on switchable, federated services and less on proprietary, centralized services. Otherwise cloud computing, though invariably based on open source products, could become the next bastion of commercial, platform lock-in.

Clouds of different colors

It’s difficult to have a discussion of cloud computing these days without talking about Platform-as-a-Service (PaaS). PaaS, another hot acronym du jour, is essentially a cloud computing service that has been opened up into a platform that others can build upon, similar to the way that Windows or LAMP are platforms designed to be built upon. Utility computing is another common phrase in cloud computing discussions; it primarily focuses on the business model of cloud computing, with a “pay for what you use” approach that reduces the waste and underutilization of traditional corporate data centers. While these are both important aspects of cloud computing, they don’t completely describe the individual types of cloud computing capability available today.
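The "pay for what you use" economics can be made concrete with a toy calculation. All the rates below are hypothetical, chosen purely for illustration; the point is only that utility cost scales with actual usage, while owned capacity costs the same regardless of utilization.

```python
def utility_cost(hours_used, rate_per_hour):
    """Utility computing: the bill scales directly with actual usage."""
    return hours_used * rate_per_hour

def owned_capacity_cost(monthly_fixed_cost):
    """Owned data center: the bill is the same whether utilization is 5% or 95%."""
    return monthly_fixed_cost

# Hypothetical rates, in cents, for illustration only.
light_month = utility_cost(hours_used=50, rate_per_hour=10)    # light usage: 500
heavy_month = utility_cost(hours_used=700, rate_per_hour=10)   # heavy usage: 7000
fixed = owned_capacity_cost(50000)                             # 50000 either way
```

In a light month the utility model costs a tiny fraction of the fixed alternative, which is exactly the underutilization waste the utility model is meant to eliminate.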

Within cloud computing itself there are a number of distinct types of services that can be provided, and current vendors in the space tend to focus on one specific area or another. It should also be noted that by selecting a cloud computing type and vendor, you are also selecting an architecture. This is a significant decision since the architecture of a cloud computing service will dictate how it can be used, what standards are supported, the amount of lock-in being imposed, and the flexibility, security, performance, and just about every other aspect, including ultimately what it’s possible to do.

Here are some of the types of cloud computing services that are emerging today:

  • Compute Clouds - Amazon’s EC2, Google App Engine, and Berkeley’s BOINC are all examples of compute clouds, albeit with very different models. All of these services allow access to highly scalable, inexpensive, on-demand computing resources that run the code they are provided. Compute clouds are the most general-purpose cloud computing services and can be used for a variety of purposes. While enterprises can use any of these services today, they largely lack the standard management, monitoring, and governance capabilities that large organizations would expect and be familiar with. Amazon does offer enterprise-class support today for its compute cloud and infrastructure, and its highly open nature allows anyone to run the infrastructure management pieces they choose. There is also an emerging set of enterprise cloud computing offerings, such as Terremark’s Enterprise Cloud, that are designed for enterprise use.
  • Cloud Storage - Storage was one of the first major services to appear in the cloud and remains one of the most popular and well-addressed segments in the cloud computing realm. A list of 100 cloud storage services was recently released showing how crowded this market already is. Security and cost are the top issues and vary widely across offerings with Amazon’s S3 being the market leader at present.
  • Cloud Applications - Software applications that rely on infrastructure in the cloud fall into this category. Cloud applications are an off-premises form of Software-as-a-Service (SaaS) and can range from Web apps delivered to users entirely via a browser to hybrids like Microsoft Online Services, which explicitly offloads hosting and IT management into the cloud and consists of both native and Web clients with application infrastructure hosted elsewhere.

One type of cloud computing tends to defy traditional categorization, and that’s harnessing human workers in the cloud, as a service. This is best exemplified by Amazon’s intriguing offering, Mechanical Turk, which plugs thousands of people into its on-demand cloud. This model includes any service that provides a consistent, service-oriented interface over a network to interact with people in a directed, collaborative manner. This is an on-demand form of outsourcing as well as a cloud-based form of crowdsourcing.

Getting ready for cloud computing in the enterprise

Like so many aspects of Web 2.0, the industry is moving a lot faster than most businesses are currently able to keep up with. Cloud computing, however, may offer such significant and easily accessed economic advantages that it has a good chance of being adopted a bit faster than usual. Particularly as leases come off of data center resources, many IT shops will begin to take a hard look at “cloudsourcing” part of their capabilities and operations in their next round of infrastructure improvements in an incremental fashion.

The first candidate cloud computing pilots will generally be outside of core IT and will be of secondary and tertiary importance to the organization. Forward thinking organizations will begin trying out providers and learning the cloud computing ropes, though certain organizations, like government agencies and others managing extremely vital information, will likely be the last to take the leap. Like any form of outsourcing, fully leveraging the cloud will take some time to get good at as IT departments get clarity around lock-in, security, scalability, reliability, governance, and real-world costs. However, it’s clear that the forecast for enterprise IT is increasingly “cloudy” for the next few years.

Are you using or planning to use cloud computing in your organization this year? Why or why not?

Dion Hinchcliffe: A veteran of software development, Dion Hinchcliffe has been working for two decades with leading-edge methods to accelerate project schedules and raise the bar for software quality. See his full profile and disclosure of his industry affiliations.


The Elements of Social Architecture

posted Jun 20, 2010, 7:08 PM by Kuwon Kang   [ updated Jun 20, 2010, 7:09 PM ]

MARCH 03, 2009

The Elements of Social Architecture



We are pleased to present a shortened and edited excerpt from the second edition of Information Architecture: Blueprints for the Web. –Ed.

Published in 1977, Christopher Alexander’s A Pattern Language: Towns, Buildings, Construction contains the collective wisdom of world cultures on centuries of building human housing. It had a resounding effect not only on architecture and urban planning, but also on software design. In it, Alexander and his co-authors explored 253 architectural design patterns. For example:


Conflict: Children love to be in tiny, cave-like places.

Resolution: Wherever children play, around the house, in the neighborhood, in schools, make small “caves” for them. Tuck these caves away in natural leftover spaces, under stairs, under kitchen counters. Keep the ceiling heights low—2 feet 6 inches to 4 feet—and the entrance tiny.

Every pattern explores ways of designing space to meet human needs and promote happiness. It doesn’t take great imagination to apply these architectural principles to information architecture—taking them out of the real world and into the digital world.

Humans can behave in surprising ways when you bring them together. In an information space, a human’s needs are simple and his behavior straightforward. Find. Read. Save. But once you get a bunch of humans together, communicating and collaborating, you can observe both the madness and the wisdom of crowds. Digg, an online news service in which the top stories are selected by reader votes, is as likely to feature an insightful political commentary as an illegal software crack at the top of its page. This unpredictability makes architecting social spaces the most challenging work a designer can take on.

While your designs can never control people, they can encourage good behavior and discourage bad behavior. The psychologist Kurt Lewin developed an equation that explains why people do the crazy things they do. Lewin asserts that behavior is a function of a person and his environment: B=f(P,E). You can’t change a person’s nature, but you can design the environment he moves around in. Let’s explore some Alexander-style patterns I’ve observed in my own work and in the design work of my fellow practitioners.


Conflict: Who can you trust online?

Resolution: Give each user an identity, and then allow him to customize it as he sees fit. The identity allows the user to express his personality, and is typically accessed and protected via a unique log-in. Participation is rewarded by enhanced reputation and the ability to collect items in the system (bookmarks, history, relationships, and so on).

Identity is the bedrock of social architecture. In the brilliant essay “A Group Is Its Own Worst Enemy,” Clay Shirky writes:

If you were going to build a piece of social software to support large and long-lived groups, what would you design for? The first thing you would design for is handles the user can invest in.


To allow your user to successfully create an identity, you need to provide ways for users to reveal themselves online. Key elements of online identity include:


A profile is a collection of information about the user, typically including a short biography and contextually appropriate facts. Orkut, Google’s foray into social networks, collects and displays gender and marital status prominently. LinkedIn, a business networking site, doesn’t touch any of these bits of information, focusing instead on job history, skill set, and education.

Orkut user profile.
LinkedIn user profile.


One thing any inviting and vibrant community needs is a sense of life. Presence is a way for users to express themselves and populate the online space. Presence can be a status, a history of activity, or a location.


On a website, your reputation is equal to the sum of all your past actions on the site, good or bad, where the community defines “good” and “bad.” Since human memory is fallible, and of course, new members or visitors don’t always know the history, reputation systems are built into web software to track behavior and how the community judges it. Amazon’s “Top 500 Reviewer” or eBay’s “Top Seller” designations are great examples of reputation systems.
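The reputation mechanism described above can be sketched in a few lines: a user's score is just the running sum of community judgments on past actions. This is a hypothetical minimal model for illustration, not Amazon's or eBay's actual algorithm.

```python
class Reputation:
    """Toy reputation tracker: score is the sum of all past community votes."""
    def __init__(self):
        self.history = []  # list of (action, community_vote) pairs

    def record(self, action, vote):
        # vote is +1 if the community judged the action "good", -1 if "bad";
        # the system remembers so that new visitors don't have to.
        self.history.append((action, vote))

    def score(self):
        return sum(vote for _, vote in self.history)

seller = Reputation()
seller.record("shipped on time", +1)
seller.record("item as described", +1)
seller.record("slow refund", -1)
# seller.score() is now 1
```

A designation like "Top Seller" is then just a threshold applied to this score, with the crucial design decision being how the community's votes are collected and weighted.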


Conflict: On a website with thousands or millions of people, how do you make sure you can keep track of the people you care about?

Resolution: Create ways for people to identify, connect, and organize the people they care about, as well as the information those people produce. The complexity of relationship classification depends on how your customers will use your website.

Relationships are always present in communities. Online, site software manifests and categorizes relationships according to the community’s needs. It can be simple, such as Twitter’s flat following model. Twitter is based on the concept that people broadcast to others and subscribe to others’ broadcasts as they would to a magazine. The Twitter system design doesn’t recognize that mutual following might be a proxy for friendship. The entirety of the relationship is, “I’m interested.”

Following on Twitter.
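Twitter's flat, asymmetric model described above can be sketched as a directed graph of "I'm interested" edges; mutual following is just two independent edges that the system never promotes to "friendship." This is a hypothetical minimal model, not Twitter's implementation.

```python
class FollowGraph:
    """Asymmetric following: an edge A -> B means only 'A is interested in B'."""
    def __init__(self):
        self.following = {}  # user -> set of users they follow

    def follow(self, follower, followee):
        self.following.setdefault(follower, set()).add(followee)

    def is_mutual(self, a, b):
        # Mutual following is merely the coincidence of two independent
        # edges; the data model carries no notion of friendship.
        return (b in self.following.get(a, set())
                and a in self.following.get(b, set()))

g = FollowGraph()
g.follow("alice", "bob")
g.follow("bob", "alice")    # bob independently follows back
g.follow("alice", "carol")  # carol never reciprocates
```

The simplicity is the point: with no relationship categories to manage, the cost of following someone is nearly zero, which encourages the magazine-style subscription behavior the text describes.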

Offering more choices to define the nature of relationships allows the user greater control, but introduces complexity. Flickr offers categories for “friends,” “family,” and “contacts,” which users apply as they see fit, including marking as friends people who are really family, and as family people who are really friends.

When your Flickr contacts grow to 100, “friends” becomes a useful tool for watching some people a little more closely, because your friends appear in the center of your homepage. Additionally, you can set viewing permissions based on these distinctions. For example, a college student might show his most intimate photos only to a list of close friends labeled family. He may label actual family members as friends or contacts. Each label has a built-in set of assumptions that may or may not apply to the user’s needs. But having clarity on who sees what, no matter the label, is useful.

Flickr friends and family.
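The Flickr-style permission scheme described above can be sketched as a simple set intersection: a photo carries the set of relationship labels allowed to view it, and a viewer carries the labels the owner has applied to them. The label names mirror Flickr's categories, but the code is a hypothetical illustration, not Flickr's implementation.

```python
def can_view(photo_allowed, viewer_labels):
    # photo_allowed: set of relationship labels permitted to see the photo
    # viewer_labels: labels the photo's owner has applied to this viewer
    return "public" in photo_allowed or bool(photo_allowed & viewer_labels)

# The college student from the text: intimate photos visible only to the
# list labeled "family", which actually holds his close friends.
intimate = {"family"}
close_friend_sees = can_view(intimate, {"family"})    # True
relative_sees = can_view(intimate, {"contacts"})      # False
```

Note that the check cares only about the label, not the real-world relationship, which is exactly why users can repurpose "family" and "friends" however they like and still get clarity on who sees what.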


Relationships on the web are just as important as relationships in real life. Three key elements of relationships are:


Give your users a way to classify the humans in their life. It can be as simple as just saying, “I know him,” or “I don’t know him,” or as complex as saying, “We were college roommates, but we haven’t spoken in ten years.”


Groups are another relationship structure, based on shared interests or experiences rather than personal connection. They include alumni groups, work groups, and professional organizations.


While walking into a biker bar wearing a Yamaha tee shirt might get you killed, wandering into a Star Wars forum and saying, “George Lucas ripped off Joseph Campbell, and he didn’t even do it well!” will generate 500 flames an hour. We call those who violate norms “trolls,” and it’s best to prepare for them. Create rules for behavior and consequences for violating the rules such as a time-out or a ban.


Conflict: If there’s nothing to do on a site, then it doesn’t matter whether all your friends are there or not. The site has no more appeal than an address book, and it won’t get affection or traffic.

Resolution: Create activities that are useful to individuals but are much improved by group participation.

The third major pattern in social software is community activity. This is just like being a party planner: you’ve brought people together, now what? Happily, humans have things they like to do together, and if you get them in the same spot and give them even rudimentary tools, they’ll start talking, sharing, and collaborating.

On 43 Things, people share secrets and dreams, and support each other in their shared goals.


The more things your users can do on your site, the more time and energy they’ll spend there. Some elements of activity are:


Gift giving is a primitive human behavior—it binds us. When one person gives something to another, there’s gratitude, and a desire to reciprocate. In online community settings, where the nature of the medium ensures you retain a copy of any files you give to someone else, gift giving becomes sharing. Sharing gathers people of like interests, and allows for an exchange of ideas. As the community tightens, sharing permits exchanging dreams, hopes, secrets, and fears.


Conversations and communication—that’s the heart and soul of a community. No matter how much software we build, people build the relationships, and they build them out of words first. If you don’t have a place for people to put their words, community devolves into viewership.


Social software was envisioned as a tool to allow work groups to collaborate. While the “social” part may have swept the web, there are still plenty of tools that focus on allowing smaller groups to get things done.

Architecture for Humans

Humans are complex, and the web is dynamic. Many more innovations and patterns of excellence will be defined. Yet human contact and interaction are not new. From A Pattern Language:


Conflict: People are different, and the way they want to place their houses in a neighbourhood is one of the most basic kinds of difference.

Resolution: Make a clear distinction between three kinds of homes―those on quiet backwaters, those on busy streets, and those that are more or less in-between. Make sure that those on quiet backwaters are on twisting paths, and that these houses are themselves physically secluded; make sure that the more public houses are on busy streets with many people passing by all day long and that the houses themselves are exposed to the passers-by. The in-between houses may then be located on the paths halfway between the other two. Give every neighborhood about an equal number of these three kinds of homes.

This pattern describes the design of a town, but it can be applied to social network design. Facebook was initially lambasted over what has become its most popular feature: the news feed. The news feed is the equivalent of the town square where you can see what’s going on with everybody and chat about it. Some people want to live on the town square, so they don’t miss a thing. Others like to live at the edges of town, away from prying eyes and overwhelming updates. Social architecture’s ongoing challenge is to find intelligent and subtle ways of allowing people to choose degrees of publicness (including shelter from other people’s publicness).
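One way to let people choose degrees of publicness is to attach a visibility level to each update and filter feeds accordingly. The level names and `feed` function below are invented for illustration, loosely echoing the “quiet backwater” versus “town square” distinction from the pattern above:

```python
# Hypothetical degrees of publicness, ordered from most private to most public.
PUBLICNESS = {"backwater": 0, "in_between": 1, "town_square": 2}

def feed(updates, min_level="in_between"):
    """Broadcast only updates whose authors chose at least min_level publicness."""
    return [u["text"] for u in updates
            if PUBLICNESS[u["publicness"]] >= PUBLICNESS[min_level]]

updates = [
    {"text": "moved house", "publicness": "backwater"},
    {"text": "new job",     "publicness": "town_square"},
]
print(feed(updates))
```

The key design choice is that the author, not the feed, sets the level: people on “quiet backwaters” are sheltered by default rather than having to opt out of the town square.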

If we remember the social in social architecture, we can continue to make new products that delight people as well as change their lives. 

Robust Design & Robust Test

posted Jun 20, 2010, 7:06 PM by Kuwon Kang   [ updated Jun 20, 2010, 7:08 PM ]

We are currently applying Robust Design in building Samsung Life's next-generation system.

The core is error-free implementation, discovering new design information, and optimizing the product's quality/reliability, performance, and cost.

The key implementation methodologies for realizing Robust Design are as follows:

Simulator, TDD, Robust Class design, Healthy Class, Independent Class, Test-friendly Class, Simple Class, Multi-thread-safe Class, Easy Class, and so on.

Through the Robust Design Methodology process, the factors mentioned above are continuously improved and optimized.
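The post names these class qualities without defining them. As one hedged interpretation, a "test-friendly, independent class" injects its dependencies rather than constructing them internally, so TDD-style tests can substitute fakes. The `PremiumCalculator` name and rate logic below are invented for illustration and are not from the Samsung Life system:

```python
# Sketch of a test-friendly, independent class: the collaborator is injected,
# so tests need no database or network. All names here are hypothetical.
class PremiumCalculator:
    def __init__(self, rate_source):
        self.rate_source = rate_source   # injected dependency, easy to fake

    def premium(self, base_amount):
        return base_amount * self.rate_source.current_rate()

class FakeRateSource:
    """Test double standing in for a real rate service."""
    def current_rate(self):
        return 0.05

calc = PremiumCalculator(FakeRateSource())
assert calc.premium(1000) == 50.0   # TDD-style check against the fake
```

Dependency injection like this is one common route to the "Independent Class" and "Test-friendly Class" factors listed above, since each class can be verified in isolation.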
