HP World '99 and ERP World '99 Conference Proceedings

"Time to Market" Concepts
Enable Rapid System Selection
and Implementation

Michael S. Sulaver
The Results Group
800 West El Camino Real, Suite 180
Mountain View, CA 94040
tel: 650-234-9979
fax: 650-324-2086
mikes@resultsgroup.com

Executive Summary

By leveraging the modularity of current generation Enterprise Systems, the strategies, concepts, and techniques proven successful in reducing time-to-market in product development can be utilized for rapid selection and implementation of business systems. The profitability and competitive advantages a company establishes are often built on the capabilities of these critical business systems. Enterprise Systems include:

An Enterprise System project structured and managed similarly to the design and initial introduction of a new technology platform, with a roadmap of follow-on product extensions, can enable a company to select an Enterprise System in 2-3 months and implement it in 2-4 months instead of the typical 12-24 months. Using this approach, a company can be leveraging the benefits of the new Enterprise System within 4-7 months, roughly the time system selection alone takes under the more typical approach.

Instead of a one-time, giant step-function introduction, the Enterprise System selection and implementation project can be approached as a rapid deployment of a new technology platform with follow-on introductions of new incremental modules or features. The objective of the initial implementation is to rapidly bring on-line and stabilize the Enterprise System with roughly the same functionality as the current system, but with architecture that meets future business requirements. Small, knowledgeable cross-functional teams execute the project.

As in a multi-phased product introduction, the robustness of the technology platform's design is critical to the success of the program. In a rapid deployment of an Enterprise System, the same is true for the development of the architecture. This architecture is the utilization of the characteristics of the selected Enterprise System to meet the requirements of the company. The team uses its creativity and vision to develop this infrastructure backbone. The system selection and architecture are managed as product design development, including defining requirements, developing tools to facilitate objective review, developing the design, testing to ensure the design meets the requirements, and performing design reviews. Throughout the selection and implementation process, making quality decisions by using good business judgment and balancing risk is emphasized.

[]

Rapidity is achieved through many means. Concurrency of design, small teams, and clear objectives and processes facilitate the Enterprise System selection and implementation. Properly structuring tools developed early in the selection process for later reuse increases team productivity. The scope of initial implementation minimizes training, testing, documentation, and customization. Initial change is limited. Selective testing to meet specific test objectives is utilized. Common pitfalls of unnecessary complexity and customization, overloaded training and re-engineering efforts, and scope and schedule creep are minimized.

The major principles of this rapid system selection and implementation process are:

Overview

Rapid system selection and implementation (RapidSSI) is a phase-focused project approach. The RapidSSI process facilitates fast, quality decision-making and post-implementation system stabilization.

Success is achieved by rigorously maintaining focus on the objectives of implementing a new system rapidly, but with quality decision-making by a small, knowledgeable cross-functional team. Senior management participates as a steering committee, quickly reviewing and approving team deliverables and removing roadblocks to the team's continued progress. System supplier activities become part of the critical path.

The development of the system architecture is critical. The architecture must enable the company to utilize the new system to efficiently support its future business environment. Thus, architectural requirements include current and future needs of the company and the characteristics and features of the new system. However, the architecture must also allow phased implementation of additional beneficial system modules and features without abandoning processes established during the system's initial implementation. It also assists in identifying processes that need re-engineering or add-on application code.

The structure, expertise, and size of the project team have a direct impact on the success of the project. The team requires a clear understanding of the implementation schedule objective. It must have the vision, expertise, and decision-making capability to quickly accomplish project tasks--develop system requirements, analyze alternative systems and make the final system selection, architect the system, decide which modules and features are to be implemented during the initial phase, and get data converted and users trained. It determines which processes must be re-engineered, or what customized application code has to be created, in order to meet the objectives of the project. The team is as small as possible to expedite decision-making, but it must have adequate resources and the skill set to accomplish all required tasks with quality decision-making. The same critical success factors in Product Development apply in Rapid System Selection and Implementation as shown in the figure below:

[]

The major steps of RapidSSI are:

Utilizing RapidSSI, The Results Group has assisted clients in selecting systems within 2-3 months and implementing systems within 2-4 months.

Rapid System Selection

Charter and Staff System Selection Team

The first step of the RapidSSI process is to charter and staff a system selection team. The size of the team is dependent upon the scope of the system implementation and the breadth of the team members. While all team members may not later migrate to the system implementation team, for expediency of implementation it is preferable that most of them do. The team consists of:

Team members must have adequate bandwidth to meet an aggressive selection schedule. Typically, each team member needs to allocate an average of 50% of his or her time to RapidSSI to achieve the most aggressive schedule. Given the breadth, vision, evaluation, and decision-making skill requirements of team members, these are typically key personnel within the organization whose time is at a premium. However, internal skill sets and/or bandwidth can be supplemented effectively with external resources. External resources can be added to the RapidSSI team to provide additional skill sets including:

External resources can provide bandwidth relief for specific team tasks including:

Develop System Requirements and Evaluation Tools

The primary criteria for system selection are the present and future business requirements of the company. Therefore, it is important to clearly identify these requirements to allow objective evaluation of alternative systems relative to these needs.

In developing the requirements, the team considers:

This requirements document is the basis for objective evaluation of systems and will be used to screen Enterprise Systems. It is a team deliverable which is to be quickly reviewed and approved by senior management.

Based upon the system requirements developed and the functionality of the company's current business system, the team prioritizes the system processes and features to be implemented. Processes are categorized into phases:

This roadmap phase-in document will be utilized when evaluating alternative systems and updated when developing the architecture of the new system.

To move quickly during review of alternative systems, an evaluation tool is necessary to promote objectivity. While the requirements document assists in ensuring systems meet requirements, an evaluation tool enables effective discussion about the characteristics of each system. Evaluation tools range in complexity from a list of questions to a weighted, normalized scoring matrix. The degree of structure and detail of the evaluation tool is determined by the need of the team to gain rapid agreement. Factors evaluated in the tool may include:

If a spreadsheet is used, a soft copy of it is included in RFPs and used by suppliers to respond to the questions, facilitating the team's summarization of responses. This evaluation tool is refined later to incorporate additional differentiating features of system semi-finalists. It is a team deliverable which is to be quickly reviewed and approved by senior management.
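The weighted, normalized scoring matrix mentioned above can be sketched in a few lines. The criteria, weights, and raw scores below are hypothetical examples for illustration, not recommendations; a real matrix would carry the team's own requirements and weightings.

```python
# Hypothetical weighted, normalized scoring matrix for comparing
# candidate Enterprise Systems. Criteria, weights, and scores are
# illustrative assumptions only.

CRITERIA = {  # criterion -> weight (relative importance to the company)
    "functional fit": 0.35,
    "architecture/scalability": 0.25,
    "supplier viability": 0.20,
    "implementation support": 0.20,
}

MAX_RAW = 5  # raw team scores run 1 (poor) to 5 (excellent)

raw_scores = {
    "System A": {"functional fit": 4, "architecture/scalability": 3,
                 "supplier viability": 5, "implementation support": 4},
    "System B": {"functional fit": 5, "architecture/scalability": 4,
                 "supplier viability": 3, "implementation support": 4},
}

def weighted_score(scores):
    """Normalize each raw score to 0-1, then apply the criterion weights."""
    return sum(w * scores[c] / MAX_RAW for c, w in CRITERIA.items())

ranking = sorted(raw_scores, key=lambda s: weighted_score(raw_scores[s]),
                 reverse=True)
```

As the paper notes, the resulting score is an input to the team's discussion, not by itself the decision-maker.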

By establishing a systems requirements document, roadmap plan, and an evaluation tool, a subset of the team can be tasked with collecting initial information about alternative systems in order to expedite the selection of possible systems.

Evaluate Alternative Systems

Utilizing the previously generated requirements document, the team must first match the system software complexity to the business complexity. Focusing the search within the appropriate complexity tier will increase the effectiveness of the team.

The team (or a subset of the whole team) identifies alternative Enterprise System products and gathers information from the suppliers or from 3rd party evaluations and references. These alternatives may be identified, and information acquired, through team member experience, the Internet, or, if timely, trade shows.

Trade shows are an effective means of seeing systems demonstrated, talking to people who have experience with the systems (suppliers and users), and getting immediate feedback from suppliers on their competitors' products. By going back and forth between booths and discussing competing systems' merits or drawbacks, the team is in a better position to understand the strengths and weaknesses of competing systems.

Using information gathered from suppliers and 3rd party sources, and utilizing the evaluation tool, the team narrows the field and qualifies 6-8 suppliers who best meet the requirements. To facilitate review, a subset of the team can use the evaluation tool and make a recommendation to the whole team. The whole team reviews the recommendations and queries key data.

The qualified suppliers are invited to perform an on-site demonstration of their system products. This demonstration should be scheduled for approximately 2-4 hours. (Time for supplier setup may be additional.) Prior to the demonstrations, each supplier is provided sufficient information about the company and its Enterprise System requirements to elicit supplier recommendations for the architecture of its system in the company's current and future environment, and the supplier's availability for implementation within the RapidSSI schedule. This information may also assist the supplier in tailoring its demonstration (both in content and personnel) to emphasize required features and answer questions. This company information packet can include, but is not limited to:

The suppliers can assist the team in defining what information would be helpful for the supplier to prepare for the demonstration.

Demonstrations are best scheduled to occur within a 2-3 day span in order to facilitate comparisons. After each demonstration, the team evaluates the system. (Whether each team member first evaluates individually, with the team then reviewing a summary of all evaluations, or the team evaluates the system together as a group depends on the chemistry and availability of the team.) During the demonstrations, suppliers are apprised that fast turnaround of RFPs (Requests for Proposal) is required and encouraged to begin preparation to do so. Based upon the team's evaluation, 4-6 suppliers are identified to receive RFPs.

The RFP can be quickly developed by a subset of the team utilizing the previously generated requirements document, company information packet, and evaluation tool. The RFP also includes a system selection and implementation schedule. While developing the RFP, the requirements and evaluation tool are refined based upon the knowledge the team gained during the initial information gathering and on-site demonstrations. These refinements may include altering the significance of a system characteristic or feature, adding differentiating characteristics or features, or eliminating factors that all potential systems equally possess. The RFP also includes a soft copy template of questions to be answered by the suppliers to facilitate the team's summarization of RFP responses.

Upon receipt of supplier responses, each team member receives a copy for his/her review. In parallel, a subset of the team summarizes responses utilizing the structure of the evaluation tool. A second subset of the team uses supplier responses in generating a preliminary financial justification to ensure all finalists will meet budgetary and return on investment requirements.

By using the team's most recent evaluation update (after the on-site demonstrations), new information or variances between the team's current evaluation and the supplier's RFP response can be quickly identified for team review. If the evaluation tool utilizes a scoring methodology, each system's score is a factor in the decision-making, but is not by itself, the decision-maker. The team reviews the summarized responses and selects 2-3 finalist suppliers to do customized demonstrations on-site. (To qualify as a finalist, it is imperative that the team receives the supplier's guarantee that it has the resources and commitment to meet the company's implementation schedule.)

Finalize System Selection

The team makes its final selection of a system based upon the results of the customized demonstration and satisfactory visits to reference sites. An important element to the customized on-site demonstration is the specification for the demonstration. Earlier supplier demonstrations typically used "canned" presentations. The emphasis for the final demonstration between finalists should be two-fold:

Thus, the team must identify what it wants to see demonstrated in order to make a knowledgeable final decision between finalists.

The team (or a subset of the team) develops a specification of the environment to model (which primary system model, processes, and features are to be demonstrated), data, and a script for the supplier to use to demonstrate differentiating features. Data provided to the supplier to seed the environment may include representative BOM's, item masters, purchase orders, sales orders, customers, and suppliers.

The script includes cases or situations that are common within the company's business environment that the team wants demonstrated. For example, if the company expects to utilize production orders to schedule activity and pick parts and it is in an environment with engineering changes that occur frequently and are implemented immediately into work in process, the script may include generating new production orders and picking their respective materials, then modifying the same bills of materials, changing released production orders accordingly, and reissuing materials. If a company has multiple engineering projects that share resources and parts, the script may include making updates and then reviewing reports or queries to determine the availability or status. While the script is to be provided to the suppliers to assist in their preparation for the final demonstrations, details of the script may be kept hidden until the actual demonstration to mimic real world uncertainties.

In order to facilitate final evaluation, it is best if final demonstrations are scheduled one per day on consecutive days with 4-6 hours allocated per demonstration. At least one hour of this time is allocated at the end of the demonstration for questions and answers from both supplier and team. It is imperative that the supplier guarantees that it has the resources and commitment to meet the company's implementation schedule. The team meets and debriefs each day after a supplier's demonstration to capture immediate feedback, make necessary modifications to the script (especially important after the first demonstration), and make necessary modifications to the meeting agenda.

Upon completion of all finalist demonstrations, the team meets to identify its preferred system and supplier. The evaluation tool is used to provide a structured method for effective review about the characteristics of each system and their importance relative to the company's environment. Upon reaching consensus, the team quickly contacts the supplier to arrange for the team's visit to 1-2 reference sites to corroborate the supplier's claims prior to contract signing.

While checking references, the team gets senior management's final approval. The analysis supporting the team's recommendation includes the system requirements document, final results of the evaluation tool, the team's final qualitative assessment about the preferred system, the supplier's proposal and schedule, and a completed financial justification.

Upon receipt of management approval, a subset of the team and appropriate company personnel negotiate the contract between the company and the system supplier. A key element of this negotiation is the agreement of the implementation schedule, including defining the resources to be provided by the supplier and the company. This negotiation occurs in parallel with reference site visits, contingent upon satisfactory references.

Enterprise System Selection-Most Aggressive Plan

[]

Rapid System Implementation

Charter and Staff System Implementation Team

The first step of the RapidSSI implementation process is to charter and staff a system implementation team. As in system selection, the size of the team is dependent upon the scope of the system implementation and the breadth of the team members. To facilitate implementation, it is preferable that selection team members who have the skill set necessary for implementation continue with implementation. The team consists of:

After development of system architecture, participation within the team is expanded to facilitate rapid implementation. Added to the team:

As in selection, implementation team members are typically key personnel within the company. Bandwidth and skill sets are crucial to the success of the project. External resources can be added to the RapidSSI team to provide additional skill sets including:

External resources can provide bandwidth relief for specific team tasks including:

Establish Implementation Project Guidelines

The primary objective for system implementation is achieving an aggressive implementation date, providing business capabilities roughly equivalent to those of the current system with a minimum of changes to current processes or to the new Enterprise System. Success is achieved by rigorously maintaining focus on this objective. However, during the process of architecting or testing the selected system to meet the company's current and future needs, tradeoffs may have to be made to meet this objective.

To ensure the primary objective is met, the team establishes implementation project guidelines. These guidelines will assist the team in maintaining its focus. These guidelines include, but are not limited to:

The team confirms the modules, processes, and features that are to be initially implemented (Phase 1). Input to the team includes the roadmap phase-in document generated by the system selection team and the agreed-to implementation schedule. The roadmap should be updated to meet the schedule and system objectives of Phase 1. As the implementation process continues, the team redefines what modules, processes, and features are in Phase 1, and what is deferred to Phase 2 or Phase 3.

To maintain Phase 1 objectives, but allow a process for prudent change, senior management acting as a steering committee quickly reviews and approves changes that are outside the established guidelines. In addition, this steering committee continues to monitor progress of the team to schedule and addresses impediments to the team's progress.

Develop and Test System Architecture

The first step in developing the system architecture is for the team to receive introductory training on the modules, processes, and features included in Phase 1. The System Expert provides this training. It is focused, typically a subset of standard comprehensive "teach all the capabilities" supplier-provided training. Its purpose is to teach the team about what the system can do. Its emphasis is on core processes that have a high probability of being used in Phase 1. By narrowing the scope of the training, the duration of the training is reduced. (The risk that a beneficial feature will not be introduced to the team during this initial training is mitigated by the continued participation of the System Expert in the development of the architecture.)

[Figure: System Capabilities / Vision of Business]

Using the requirements document, expected company growth and future business environment, and experience, the team outlines a vision of the current and future business. The team then outlines at a high level what the system can do to support the vision of current and future needs of the business. To facilitate development, a subset of the team can draft an architecture that can then be thoroughly reviewed by the whole team. The architecture should support the primary, and if applicable, secondary business models (e.g. ship to order, assemble/configure to order, build to order, engineer to order) and probable business processes (e.g. outsourcing, flow manufacturing, lot traceability, revision and configuration control, and available-to-promise order administration). Block diagrams indicating linkages between modules are developed, representing transactions or information flow. Volume and/or frequency of flow are graphically represented.
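As a minimal sketch of the block-diagram step, the module linkages can be captured in a simple data structure so that "non-standard" links are explicit and easy to pull out for validation testing. The module names and flows below are hypothetical illustrations, not a prescribed architecture.

```python
# Hypothetical module-linkage list for an architecture block diagram.
# Each entry: (from_module, to_module, flow, is_standard).

links = [
    ("Order Administration", "Planning", "sales order demand", True),
    ("Planning", "Purchasing", "MRP recommendations", True),
    ("Purchasing", "Inventory", "receipts", True),
    ("Inventory", "Order Administration", "available to promise", True),
    ("Engineering", "Inventory", "direct revision update", False),  # non-standard use
]

def nonstandard_links(links):
    """Links the team must validate for side effects before go-live."""
    return [(a, b, flow) for a, b, flow, standard in links if not standard]
```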

During the team review of the draft architecture, the team evaluates the robustness of the architecture. Will it meet the current and expected future needs of the company, but contain flexibility to respond to changes? Does it allow a means of implementing desired system features in the future without significant re-engineering of processes re-engineered during Phase 1? Do any of the linkages within the architecture represent "non-standard" links or uses? Team review generates an approved architecture to be validated in testing and a list of issues that are to be answered by the validation testing.

The objective of the validation testing of the architecture is not to verify that the application works, but how it works within the architecture. Validation testing attempts to identify "show-stoppers" for the architecture. To minimize the time required for validation testing, the team or subteam generates a script that efficiently tests the architecture based upon the issues and linkages identified by the team during its review. Decisions incorporated within the architecture that affect system tailoring, or that are difficult to change after implementation, are adequately tested to confirm that they are correct. Any "non-standard" linkages or uses of the application are tested in order to confirm that they work as expected with no system "side effects." Needs for workarounds are identified, and the workaround concepts tested. During this testing, the System Expert reviews the proposed architecture and validation test results, and researches "non-standard" uses of the application to identify potential side effects of the "non-standard" use.

Validation testing also provides an opportunity to do limited training with core team members and key users. The testing is done either with all core team members and selected key users present, with each appropriate function sequentially following the script (maximizing cross-functional training), or with each core team member and selected key user participating as needed. (An efficient script may sequentially batch activities so that most functions are not required to participate continuously throughout the testing.)

Develop Application Installation and Data Migration Process and Plan

Rapid implementation requires efficient application installation and accurate data migration. Thus, developing an efficient process to install the application, migrate the data, and review the data after migration is important in saving time and increasing the probability of success at the time of going live. The development of this process evolves through the various stages of system testing, starting with loading the application in an environment for initial training, and continuing through the comprehensive "dress rehearsal." The System Expert and IS team members lead this process development.

The team reviews the data to be migrated and determines the most rapid and efficient method to get this data loaded on the new system. The quality of data migration is critical, both in its design (mapping fields from the old system to the correct fields of the new system) and accuracy (correctly loading the data). The majority of the data is manipulated or transferred electronically, using written conversion routines. However, it may be more efficient to load some data manually, then review it with a verification procedure. Some data is not loaded at all, but archived outside the new system.

The team utilizes Phase 1 objectives and guidelines to determine what data is critical to Phase 1. Data that is desired to be on the new system but is not critical for Phase 1, such as last year's purchase orders, can be deferred until after Phase 1. Minimizing the amount of data to migrate reduces the time of implementation. In planning for data migration, the team initiates actions within functional departments to reduce the data migration effort, such as minimizing the number of open orders to convert (e.g. schedule orders to close completely prior to the date of new system implementation, defer entering new orders until after implementation). Obsolete data is deleted from the current system prior to system implementation. (Deleting obsolete data reduces the size of the database to migrate and may also simplify electronic data migration by leaving the remaining data more uniform in format.)

Each team member determines an effective method to check for successful data migration. This methodology is to be used during dress rehearsal and implementation. The level of review is consistent with the criticality of the data and the level of risk of data corruption. Objects with significant multi-process impact, such as item master data or BOM's, must be adequately reviewed. Identifying current system and new system reports that can be used to easily crosscheck data will save time and resources. Sampling plans may include 100% field-by-field comparison for a sample of objects (e.g. 100% comparison of data for a sample of BOM's) or a sample of data fields from every object (e.g. random data fields compared from every open sales order). Comparing "processed" data may facilitate data verification. For example, a summary inventory report may exist which contains the dollar value, number of line items, and number of units within a warehouse, segmented by ABC code. By comparing this data with similar data from a current system report, accurate data migration may be verified without a 100% line-by-line comparison. (However, supplementing this summary data comparison with a sampling plan may be warranted.) Other examples of "processed" data include comparing "Where Used" reports, open order reports, or MRP recommendations.
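The "processed" data comparison described above can be sketched as follows. The inventory field names, ABC segmentation, and dollar tolerance are illustrative assumptions; a real check would be built around the reports the two systems actually produce.

```python
# Hedged sketch: verify data migration by comparing summary ("processed")
# inventory data per ABC code instead of a full line-by-line diff.
# Field names and the value tolerance are illustrative assumptions.

def summarize(rows):
    """Roll inventory rows up into value, line, and unit totals per ABC code."""
    totals = {}
    for row in rows:
        t = totals.setdefault(row["abc"], {"value": 0.0, "lines": 0, "units": 0})
        t["value"] += row["qty"] * row["unit_cost"]
        t["lines"] += 1
        t["units"] += row["qty"]
    return totals

def compare_summaries(old, new, value_tol=0.01):
    """Return discrepancies between old- and new-system summaries."""
    issues = []
    for abc in sorted(set(old) | set(new)):
        o, n = old.get(abc), new.get(abc)
        if o is None or n is None:
            issues.append(f"ABC code {abc!r} missing from one system")
        elif (abs(o["value"] - n["value"]) > value_tol
              or o["lines"] != n["lines"] or o["units"] != n["units"]):
            issues.append(f"ABC code {abc!r} totals do not match")
    return issues
```

An empty result supports, but does not prove, accurate migration; the paper's caveat about supplementing summary comparisons with a sampling plan still applies.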

Develop Desktop Procedures and Train Key Users

After successful validation testing of the architecture, the team confirms which processes must be re-engineered in Phase 1 per Phase 1 objectives and guidelines. Appropriate team members re-engineer each of these processes to optimally interact with the new system. The whole team reviews these re-engineered processes to ensure they are effective, fit within the system architecture, and meet Phase 1 objectives and guidelines.

Key users who did not participate in the validation testing of the architecture are introduced to the architecture. In addition, core team members and key users continue their hands-on training of the system's functionality using the environment created for the testing of the architecture. All ongoing processes to be used in Phase 1 are covered. Emphasis is on understanding the detailed requirements to execute the process within the company's environment. The team, the System Experts, or the supplier's system support (e.g. user guide, on-line FAQ, Help Line) addresses questions and issues that arise during this training. Phase 1 issues are acted upon immediately. Issues that can be deferred to later phases are recorded for future consideration.

During this hands-on training, core team members and key users develop focused desktop procedures that will be used by appropriate system users upon implementation. These procedures are written at the highest level possible that is sufficient for trained users to successfully execute necessary actions. Included in these procedures are appropriate steps to fix probable data entry or operational errors. The objective of these desktop procedures is to provide to users a standard data entry procedure for each ongoing process to minimize errors and variance in methodology. (This will aid in troubleshooting once the system is implemented.)

Just prior to implementation, key users utilize the desktop procedures and the comprehensive "dress rehearsal" test environment to train all appropriate users. Users are trained just prior to implementation to ensure training on the final version of each procedure and to minimize memory loss of the new procedures. Users are instructed that they are only authorized to perform the tasks for which they have been trained, utilizing the procedures they have been given. If they run into a situation for which they have not been trained, they are instructed to stop and immediately escalate it to their key user. (This will reduce errors after implementation and aid in troubleshooting.)

Perform Comprehensive "Dress Rehearsal" Final Test

"Dress Rehearsal" is the final test prior to implementation. The actual implementation sequence, all developed procedures, and tools are tested to prove out the implementation process. To facilitate the testing of operational activities, either a script of business activities or a time-compressed simulation of actual business activities is used. If a simulation is used, the team can compare actual end-of-day results from the current system to simulated end-of-day results of the same activities on the new system in a separate test environment. In addition, successfully running an MRP with a master schedule that includes a wide variety of upper-level assemblies assists in validating the format and/or completeness of the item master, supplier, Bill of Material, and inventory data.

In a new environment, the application is installed and data automatically migrated per the procedures developed. For data that is to be migrated manually, a sufficient sample of each object (e.g. customer profile, purchase order) is entered to adequately test the manual data entry and verification process and to support the script or simulation. The methods developed to verify data migration are utilized. Differences in data are immediately escalated to the team and reconciled.

If a simulation of actual activities is used, a snapshot of the current system's database is taken and migrated to the new environment. Data is collected during the consecutive 2-4 day period following the snapshot; this is the data used to simulate operations. In preparation for this simulation, users in each function collect and copy, each day of the 2-4 day period, all actual transactions they performed. At the end of each day, team members (or IS) capture and retain key reports. These copied transactions are then organized to be entered rapidly. The retained end-of-day reports are later used as a comparison between the actual end-of-day state of the current system and the same simulated end-of-day state from the new system. With sufficient planning (e.g. sequencing activities so all the purchase orders placed on a day are entered first, followed by all inventory receipts, then all material issues, then all the assembly receipts, and finally all shipments) and adequate resources, 2-4 days of transactions and comparison of end-of-day results can be compressed into a weekend. The team reconciles differences between end-of-day actual and simulated values.
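The transaction batching just described (all purchase orders first, then inventory receipts, material issues, assembly receipts, and shipments) can be sketched as a simple sort over the captured transactions. The transaction type names are illustrative assumptions.

```python
# Hypothetical replay sequencing for the dress-rehearsal simulation:
# batch a day's captured transactions by type, in dependency order.

REPLAY_ORDER = ["purchase_order", "inventory_receipt", "material_issue",
                "assembly_receipt", "shipment"]

def sequence_for_replay(transactions):
    """Order captured transactions for rapid batched entry."""
    rank = {t: i for i, t in enumerate(REPLAY_ORDER)}
    return sorted(transactions, key=lambda tx: rank[tx["type"]])
```

Because the sort is stable, transactions of the same type stay in the order they were captured.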

To improve the efficiency of the "Dress Rehearsal," it may be productive to temporarily supplement the team with additional personnel. These additional personnel can perform the majority of the manual data migration and data review, allowing team members to focus on executing the transactions of the script or simulation, reconciling variances, and addressing other test issues. It is better to err on the side of having too many resources for this final test.

The team reviews the results of the "Dress Rehearsal," identifies necessary actions to maintain Phase 1 objectives, and reviews the project's status versus schedule and the team's recommendations with senior management.

Establish Ongoing Support Process

It is imperative that a responsive, effective process for ongoing support be developed and publicized to all users prior to the implementation of the new system. An effective means of providing this support is by a two-tier approach: key users at the first level, backed up by a Quick Response Team (QRT).

Given the complexity and integration of the modern Enterprise System, support must be comprehensive and cross-functional. Even with extensive training, not all situations will have been addressed. Operator errors will occur. Minor data migration errors may be found. Unexpected error messages may appear. A transaction may not produce the result the user expected, or a transaction in one function or module may have an unexpected impact in another. To keep the new system running with accurate data, users need a means of getting immediate assistance with their questions and issues.

The objective of the QRT is to quickly provide answers to users and ensure the answers and supporting documentation are communicated to the appropriate users within the company. It addresses Phase 1 issues of "how do I do this?", "how do I fix this?", or "this is not doing what I wanted/expected." The QRT records each escalated incident. It prioritizes the impact of incidents and identifies and assigns appropriate resources to quickly address them. It reviews progress on resolving issues. It ensures solutions have been adequately reviewed by all affected functions. In addition, it monitors the rate of Phase 1 stabilization by tracking trends in the rate of opening and closing incidents.
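The stabilization trend the QRT monitors can be sketched as a running count of the open-incident backlog: when incidents close faster than they open, the backlog shrinks and the system is stabilizing. The data shape below (a list of incidents with an opened day and an optional closed day) is an illustrative assumption.

```python
# Hypothetical sketch: trend the open-incident backlog per day to gauge
# Phase 1 stabilization. Days are numbered from 1 for simplicity.
from collections import Counter

def backlog_trend(incidents):
    """Return the cumulative count of open incidents at each day's end."""
    opened = Counter(i["opened"] for i in incidents)
    closed = Counter(i["closed"] for i in incidents if i.get("closed"))
    last = max(list(opened) + list(closed))
    backlog, trend = 0, []
    for day in range(1, last + 1):
        backlog += opened[day] - closed[day]
        trend.append(backlog)
    return trend

log = [{"opened": 1, "closed": 2}, {"opened": 1, "closed": 3},
       {"opened": 2, "closed": 3}, {"opened": 3}]
print(backlog_trend(log))  # → [2, 2, 1]
```

A falling trend such as this signals that meeting frequency can be reduced and, eventually, that the QRT can be disbanded.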

The QRT consists of a subset of the implementation team. After implementation, the implementation team phases out while the QRT continues until the new system stabilizes. Each member is either knowledgeable about how the system works or an IS representative. The QRT also includes the person who has ongoing responsibility for interfacing with the system's technical customer support. At the time of "going live," the QRT members must be highly available to provide immediate support to users.

The QRT meets on a scheduled basis, initially 2-4 times per day. As the rate of new incidents declines, the meeting frequency is reduced until the system stabilizes and the QRT is disbanded. The agenda for QRT meetings includes reviewing the status of all open and in-process incidents, re-prioritizing open incidents, and reassigning resources as necessary.

Several tools are required to improve the effectiveness of the QRT. A test environment, refreshed periodically from the live company database, is required to enable incidents to be duplicated and resolved. (Initially, this can be the "Dress Rehearsal" environment.) A log to record and track all escalated incidents is required. This log, maintained by key users and the QRT, includes the details of each incident, the date it was escalated to the QRT, the priority and resource assigned to it, and its status. Additionally, the QRT must have ready access to the system's technical support.
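The incident log could be as simple as a shared spreadsheet; as a minimal sketch, the record below carries the fields listed in the text. The field names, priority scale, and status values are illustrative assumptions.

```python
# Hypothetical sketch of a QRT incident-log record with the fields the
# text calls for: details, escalation date, priority, resource, status.
from dataclasses import dataclass

@dataclass
class Incident:
    details: str
    date_escalated: str          # date the incident reached the QRT
    priority: int = 3            # assumed scale: 1 = highest impact
    assigned_to: str = ""        # resource assigned by the QRT
    status: str = "open"         # assumed values: open/in-process/closed

log: list[Incident] = []
log.append(Incident("Receipt posted to wrong location", "1999-08-02"))
log[0].priority, log[0].assigned_to = 1, "key user, Materials"
log[0].status = "in-process"
open_count = sum(1 for i in log if i.status != "closed")
print(open_count)  # → 1
```

Keeping every escalated incident in one structure is what makes the re-prioritization and trend monitoring described earlier possible.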

The responsiveness and quality of ongoing support have a direct impact on the rate of new system stabilization. The faster correct answers and procedures are provided to all users, the faster errors are eliminated. The higher the quality of answers (i.e. validating that a response maintains data integrity throughout the integrated system), the fewer the errors transferred from one function to another. For users to confine themselves to the activities for which they have procedures (and thereby reduce the number of incorrect procedural variations in use), ongoing support must be extremely effective.

Implementation Cut-In

The implementation team reviews its decision for the live cut-in of the new system with senior management. The team bases its decision upon:

Using the procedures and personnel honed in the "Dress Rehearsal," the team cuts in the new system over a long weekend. Additional temporary resources can be used to speed the manual data entry and data verification.

After the implementation team is satisfied that the cut-in has been successful, users are notified and transaction entry begins. After 1-2 days of successful operation, orders deferred from before implementation can be entered.

The process of RapidSSI enables rapid, quality decision-making. However, several additional actions can reduce the risk of catastrophic interruptions to the company's business. The planning horizon of the last MRP run on the old system can be extended, providing a backup should an MRP not be immediately runnable on the new system. Additional inventory can be acquired (on site or at suppliers) prior to system implementation. The old business system can be maintained in stand-by mode for a short period after cut-in of the new system.

[Figure: Enterprise System Implementation - Most Aggressive Plan]

Conclusion

By utilizing the principles and steps of RapidSSI, it is possible to begin to reap the benefits of a new Enterprise System within 4-7 months and have a platform and architecture that is the basis for additional efficiency and profitability. System selection can be accomplished in 2-3 months, system implementation in 2-4 months. Rapid system selection and implementation is achieved through a highly focused, disciplined process performed by select teams of skilled cross-functional personnel. Internal resources are supplemented with external resources to ensure needed bandwidth and skills. Objectives are clear. Project guidelines are established. Tools are developed. Deliverables are defined. Senior management maintains involvement by efficiently reviewing deliverables and progress, and removing barriers to the team's success.

The underlying principles of RapidSSI are:

The major steps of RapidSSI are:
