Benchmarking Best Practices to Improve Product Development

Introduction

In this era of “faster, cheaper and better,” companies are focusing on improving the product development process. New business strategies, new organizational approaches, new business processes and new enabling technology are being used by many forward-thinking companies to continually improve their product development process. How does a company keep up with these fast-paced changes? Some of the improvement opportunities are obvious to personnel within an organization. Other opportunities may not be obvious, or there are so many things to do that it becomes a question of where to start. Management will typically have a number of questions on their minds: How do we compare with the rest of industry? With the best in industry? What are our strengths and weaknesses? Is our development process aligned with our strategic objectives? What improvements need to be made? What are our priorities, given the resources we have available? What benefits can we expect? How can we figure this out quickly so that we can get started?

Assessment

No organization can improve all aspects of product development at once. The implementation of product development best practices is best viewed as a journey (continuing process improvement) rather than a destination. Priorities need to be developed for implementing the best practices of product development. The organization must start by understanding what practices should be adopted (what is possible). Next, it must consider its strategic direction. It must then assess its strengths and weaknesses. By focusing on the “gap” between where a company is and where it needs to be, priorities can be set for making improvements.

Several years ago, we led a consortium to identify product development and time-to-market best practices. These practices were derived from corporate visits, consulting assignments, conferences, workshops and meetings, literature review, telephone discussions, technology vendors, the Navy Best Manufacturing Practices Program, the Software Engineering Institute’s (SEI) Capability Maturity Models (CMM), the AT&T handbook series, and other corporate handbooks. These practices are continually updated as new best practices emerge and are identified, and as current best practices become standard practice and are no longer noteworthy.
These practices were organized into a framework with five major dimensions (strategy, organization, process, design optimization and technology) and twenty-eight best practice categories (equivalent to Process Areas in CMM terminology). In excess of 270 best practices have been identified (see the Integrated Product Development Body of Knowledge). These are described in a commercially available benchmarking tool, the Product Development Best Practices and Assessment (PDBPA) software. This software tool is used to provide an understanding of these best practices, to enable rapid and inexpensive benchmarking, and to support business process improvement. The best practices are organized into these categories for summarization and reporting purposes.

Most of these best practices are universal: they apply to the development of any kind of product in any type and size of company. Some are relevant to only certain types of products or business environments. For example, maintainability/serviceability practices don’t apply to consumable products, design for manufacturability isn’t as important with a one-off product such as a satellite, and practices related to electrical design or embedded software are not relevant to a purely mechanical product. Therefore, an importance weighting is used to tailor the importance of each best practice to the company’s products and business environment.

Associated with each of these best practices is a set of questions to aid in the assessment process. A company’s product development activities are evaluated with respect to each of these best practices, and a quantitative rating is developed. This evaluation is supported by a verbal description of the characteristics of the organization’s product development approach as it evolves toward a world-class approach to IPD. A worksheet combining these questions, ratings and descriptions supports the evaluation process.

Strategic Alignment

To be successful, an organization must have a basis for competitive advantage. While an organization needs to do a reasonable job in various competitive dimensions, it cannot be all things to all people. The enterprise must focus on one or two dimensions of competition to truly excel and be successful. The following are the competitive dimensions typically associated with product development:

- Time-to-market
- Low development cost
- Low-cost producer / low-cost, high-value products
- Innovation and product performance
- Quality, reliability, ease of use, serviceability, etc.
- Agility

Many best practices are related to one or more of these competitive dimensions or strategies. If a practice is strongly related to one of these strategies, it can be described as a strategic lever. For example, strategic levers related to time-to-market include:

- Undertake a new development project only when resources are available. Overloading stretches out projects, delaying time-to-market. Resources can instead be focused on the higher-priority projects underway, and the next highest-priority project can be undertaken as expeditiously as possible once the resources become available to support it.
- Fully commit to the project and rapidly staff to the plan to get off to a good start.
- Emphasize design re-use of modules, parts, cores, cells, part models, requirements documents, plans, technical documentation, simulation models, fixtures, tooling, etc.
- Consider addressing new requirements in the next release or next-generation product.
- Get suppliers involved early to collaborate, to utilize their ideas and suggestions, and to develop a design that is compatible with their process capabilities.
- Use product data management systems both to control product data and to streamline the process through workflow capabilities. This speeds and controls the flow of information.
- Use electronic mock-up and assembly modeling capabilities rather than building physical mock-ups.
- Emphasize early analysis and simulation to minimize build-and-test cycles with physical hardware.

By looking at the level of performance related to these strategic levers, a competitive strategy is implied. The question becomes whether this implied strategy agrees with the intended strategy. One way to view the overall implied strategy is to look at the weighted average of the performance ratings for the best practices that are strategic levers associated with each of the six competitive dimensions or strategies. A high weighted average performance rating for a particular strategic dimension, compared with the weighted average ratings in the other dimensions, suggests that the product development process has been strategically aligned to that dimension. Ideally, the rankings of these weighted average performance ratings should be aligned with the intended strategy priorities. If not, the product development process needs to be improved by applying the best practices that are strategic levers for the desired strategy.

Analysis and Improvement

In addition to the performance rating against each best practice and for each higher-level category, an overall performance rating is developed by again assigning a weighting factor to each category based on its importance given the nature of the business and the product. This performance rating, when compared to that of other companies, gives an indication of the urgency of improving the development process. Gap analysis is then employed to focus attention on the improvement opportunities that will yield the highest payoff. Categories with high weighting factors (indicating their importance to your product development success) and relatively low performance ratings yield the largest gaps between what is important to the organization and what it does well. These are the areas that deserve the highest priority in improving the development process and will likely have the largest payoff. On the other hand, categories with low importance ratings and relatively high performance ratings indicate low-priority areas not deserving as much attention. The strategic alignment analysis and gap analysis become the basis for identifying implementation actions and priorities; the concept is to pick a manageable number of improvement initiatives to focus your attention on. Once the large-gap categories are identified, an examination of the individual best practices with lower performance ratings will help identify the specific areas that require attention. In addition, identify and focus on the strategic levers that have low performance ratings and that are associated with the organization’s intended strategy. Therefore, as a prerequisite, executive management must define a vision for product development and determine the competitive strategy as a basis for aligning product development practices and developing implementation priorities.
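To make the weighting and gap arithmetic concrete, the following is a minimal sketch of the computation described above. The category names, importance weights, performance ratings and the 0–10 rating scale are all hypothetical illustrations, not values from the PDBPA tool.

```python
# Hypothetical gap analysis: the categories, weights, ratings, and the
# 0-10 rating scale are illustrative assumptions.

categories = {
    # category: (importance_weight, performance_rating)
    "Project team organization": (9, 4),
    "Design re-use":             (8, 3),
    "Supplier involvement":      (7, 6),
    "Product data management":   (5, 7),
    "Design for serviceability": (2, 8),
}

# Overall performance rating: importance-weighted average of the ratings.
total_weight = sum(w for w, _ in categories.values())
overall = sum(w * r for w, r in categories.values()) / total_weight
print(f"Overall weighted performance rating: {overall:.1f}")

# Gap analysis: high importance combined with low performance yields a
# large gap; the largest gaps are the highest-priority improvement areas.
MAX_RATING = 10
gaps = {name: w * (MAX_RATING - r) for name, (w, r) in categories.items()}
for name, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:28s} gap = {gap}")
```

In this toy example, “Design re-use” produces the largest gap (high importance, low rating) and would become an early improvement initiative, while “Design for serviceability” falls to the bottom of the list.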
This analysis becomes the basis for developing priorities and, eventually, an improvement or implementation plan. In addition, the expertise of an internal manager or outside consultant who is very knowledgeable in integrated product development concepts and improvement strategies can aid in identifying priorities. This expertise is important because there are natural relationships and sequences in the implementation and use of these best practices. For example, moving to a digital product model as a replacement for paper drawings is not realistic until a certain level of CAD capability, workstation access to the model, a product data management system, and a network infrastructure are in place.

Benchmarking for results: How to design a program that works

Reduced operating costs and improved customer service sound like corporate goals touted by private-sector CEOs; however, it is increasingly expected that governments will manage themselves with these outcomes in mind. A well-planned benchmarking program can help local governments streamline their processes, realign their organizational structure, and create a culture that helps them reduce costs and improve services. This article will discuss the benchmarking lifecycle and the key best practices that your organization should implement in each stage.

Define the purpose of the benchmarking program

Starting the benchmarking process without defining the purpose of your program is comparable to shooting before you aim; chances are slim that you will hit your mark. Governments must clearly define what they are aiming to achieve with benchmarking before they move into the preparation and data collection phase. For example, the scenarios listed below all require vastly different approaches:

- You want high-level benchmarks to help you identify “problem” areas in your organization.
- You think one of your departments is not completing projects because it is understaffed, and you want benchmarks to justify adding personnel.
- You receive complaints from departments that the contract administration process within your organization is too slow and confusing, and you want to determine best practices and potential process improvements in this area.
- You think the outsourced custodial company you use is charging you too much, but you need data to make your case.

These examples differ in the depth of benchmarking required, the data collection requirements (internal and external), and the benchmarking partner(s) that should be used. It is important that the purpose of the benchmarking program is clear so that the inputs are applicable and the outcomes add value to your organization.

Preparation and internal data collection

Depending on the depth of your benchmarking program, you will need a considerable amount of data for the areas you want to study. Once you have decided on the purpose of your benchmarking program, you will need to start collecting the internal data that you will use to compare yourself to others. The first step is to refer back to the purpose of your benchmarking program and consider the questions you need to answer. For example, if you think you may be overpaying for your custodial contract, you will need to ask questions such as, “How many square feet should each custodian be cleaning?” and “What should my custodial expenditures per square foot be?” Questions like these will drive the metrics you use in your benchmarking program.
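To show how such questions become working metrics, here is a minimal sketch that computes the two custodial measures just mentioned. All of the figures are hypothetical, and the benchmark values are placeholders rather than published industry standards.

```python
# Hypothetical custodial metrics; the benchmark values below are
# placeholders, not published industry standards.

cleanable_sqft = 250_000         # square footage actually cleaned
custodians = 10                  # custodial staff (FTEs)
annual_custodial_cost = 400_000  # annual custodial expenditures, in dollars

sqft_per_custodian = cleanable_sqft / custodians
cost_per_sqft = annual_custodial_cost / cleanable_sqft

# Placeholder benchmark values used for comparison.
BENCHMARK_SQFT_PER_CUSTODIAN = 30_000
BENCHMARK_COST_PER_SQFT = 1.40

print(f"Square feet per custodian: {sqft_per_custodian:,.0f} "
      f"(benchmark: {BENCHMARK_SQFT_PER_CUSTODIAN:,})")
print(f"Custodial cost per square foot: ${cost_per_sqft:.2f} "
      f"(benchmark: ${BENCHMARK_COST_PER_SQFT:.2f})")
```

Note that the calculation uses cleanable square footage rather than total building square footage, in line with the distinction discussed below.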
Due to the increased popularity of performance measurement in government, standard benchmarks are available for most functional areas of government. Internal data collection is one of the most time-consuming aspects of benchmarking, so governments should narrow down the measurements they will use before they begin the data collection process. The table below includes some widely used benchmarks in various functional areas. This is only a sample of commonly used benchmarks; reviewing the standard metrics used in the area you are focusing on is a necessary step in the preparation process.

| Functional area | Metric | Purpose |
| --- | --- | --- |
| Facilities management | Custodial/maintenance expenditures per square foot | Measures whether appropriate levels of spend are occurring; can lead to deeper analysis to determine what is driving costs up (or down) |
| Facilities management | Response time (days) for emergency/non-emergency repair requests | Measures the timeliness of the unit |
| Information technology | IT staff per employee | Measures the staffing level of the IT department |
| Parks and recreation | Operating expenditures per acre of land managed or maintained | Measures whether appropriate levels of spend are occurring; can lead to deeper analysis to determine what is driving costs up (or down) |
| Parks and recreation | Acres of parkland maintained per FTE (full-time employee) | Measures the staffing level of the parks and recreation department |
| Parks and recreation | Revenue per visitor | Measures the appropriateness of fees being charged; may be a catalyst for increasing fees |
| Road maintenance | Street sweeping expenditures per linear mile | Measures the cost-effectiveness of the street sweeping program |
| Road maintenance | Pavement Condition Index | Measures the condition of paved streets |
| Human resources | Employee benefits as a percent of salary and wages | Measures the appropriateness of benefits packages |
| Human resources | Time to fill open positions (days) | Measures the effectiveness of the HR department |

Data such as staffing levels or expenditures should be readily available internally, but other data may require additional effort to obtain. For example, national benchmarks for custodial staffing often distinguish between the total square footage of buildings and the square footage actually cleaned by custodians. This entails measuring areas such as utility closets and storage areas and subtracting their square footage from the total building square footage.

In cases where you are interested in reorganizing a department, implementing policies, or changing processes, metrics may not be appropriate. You may glean more relevant information by using best-practice benchmarking to compare your entity to the operations of similar or best-in-class organizations. For governments benchmarking best practices rather than metrics, data collection may require detailed mapping of process flows and decision points. For some municipalities, this granular level of data may not be readily available, and you will need to devote significant resources to data collection. The potential return on investment of the benchmarking program should be weighed against the effort and resources needed to collect the data.

Approach selection

Deciding whom to benchmark your organization against can be one of the most challenging aspects of benchmarking. It may also be the most crucial step, especially since most governments use the results of a benchmarking analysis to drive important institutional decisions. The matrix below provides some insight into this decision by listing common considerations and the most appropriate benchmarking approach for each.
While this list is not exhaustive, it touches on the considerations most commonly seen in local governments. The three most common sources of information for benchmarking are:

- National/industry benchmarks – These are typically collected and published by large, reputable industry associations and are often referred to as industry standards.
- Local peer comparables – Data collected from governments within your region that have similar characteristics, such as demographics or organizational structure, which influence the benchmark you are trying to measure. Associations or regional groups often conduct local benchmarking surveys and make the data available for use by local governments.
- Best-in-class comparables – Data collected from organizations that have won awards or otherwise been recognized as high performers in the areas you are benchmarking.

| Key consideration | National/industry benchmark | Local peer comparable | Best-in-class comparable |
| --- | --- | --- | --- |
| The function is highly regulated at the state or county level | | X | |
| Weather and other environmental conditions affect the outcome of the activity | | X | X (regional) |
| You want to assess potential problem areas across your entire organization | X | | |
| The function is highly standardized or routine | X | | |
| There may be room for improvement in how your organization performs the function | X | X | X |

These approaches are not mutually exclusive. For example, you may be fortunate enough to have a best-in-class comparable that is also a local peer comparable. In the last consideration, where there may be room for improvement in your organization, national and local benchmarks may provide the initial information that will pinpoint areas where improvement is needed; at that point, a best-in-class comparable should be used to determine goals and/or process improvements.

When choosing a local comparable, it is important to remember that a true “apples-to-apples” comparison will never be possible. Every government has factors that make it unique. Keep in mind which factors actually influence the outcome of a benchmark and control only for those factors. A city’s population should have no impact on how many staff it needs to maintain its sewer system, but the size of the sewer system certainly should be a consideration.

Accountability and implementing the results

Once your organization has expended time and money to develop and execute a benchmarking program, follow-through is imperative to ensure that the analysis is appropriate and that the results are applied. After choosing your benchmarking approach and collecting the data, refer back to the original purpose of your benchmarking program to guide your next steps. Take one of the original scenarios presented in this article: “You think one of your departments is not completing projects because it is understaffed, and you want benchmarks to justify adding an FTE.” In this case, the government originally attributed the department’s subpar performance to a lack of personnel. If the benchmarks support this assumption, they provide a way to bolster the argument for additional personnel. If the benchmarks do not support this assumption, it becomes clearer that the issue could be the result of inefficient process design, a lack of effective technology use, or poor performance by one or more staff. If this is the case, additional benchmarking and best-practice research will be required. Remember, just because the benchmarks do not support your original assumptions does not mean the problem has been solved! Accountability is a key aspect of a benchmarking program.
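To make this decision logic concrete, here is a minimal sketch of how the staffing scenario above might be evaluated against a peer benchmark. The metric (active projects per FTE), the peer benchmark value, and all figures are hypothetical assumptions, not data from the article.

```python
# Hypothetical staffing analysis: the metric, benchmark, and figures
# below are illustrative assumptions.

active_projects = 42
department_ftes = 8
peer_projects_per_fte = 4.0  # placeholder local-peer benchmark

workload_per_fte = active_projects / department_ftes

if workload_per_fte > peer_projects_per_fte:
    # Staff carry more projects than their peers: understaffing is a
    # plausible explanation, supporting the case for additional personnel.
    print(f"Workload {workload_per_fte:.1f} projects/FTE exceeds the peer "
          f"benchmark of {peer_projects_per_fte:.1f}; the benchmark supports "
          "the understaffing assumption.")
else:
    # Staffing is in line with peers: look beyond headcount at process
    # design, technology use, and individual performance.
    print(f"Workload {workload_per_fte:.1f} projects/FTE is at or below the "
          f"peer benchmark of {peer_projects_per_fte:.1f}; investigate "
          "process, technology, or performance instead.")
```

Either branch points back to the purpose of the program: the benchmark is evidence for a decision, not the decision itself.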
As part of a complete benchmarking program, the collected data should be used for several consecutive years to track the organization’s progress in meeting performance expectations. This longitudinal view can help measure the impact of discrete operational changes.
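As a closing illustration, here is a minimal sketch of that multi-year tracking for a single benchmark metric. The years, values, and the noted operational change are hypothetical.

```python
# Hypothetical multi-year tracking of one benchmark metric; the years,
# values, and the noted operational change are illustrative.

cost_per_sqft_by_year = {
    2013: 1.62,
    2014: 1.58,
    2015: 1.44,  # custodial contract renegotiated this year (assumed)
    2016: 1.41,
}

years = sorted(cost_per_sqft_by_year)
for prev, curr in zip(years, years[1:]):
    change = cost_per_sqft_by_year[curr] - cost_per_sqft_by_year[prev]
    pct = change / cost_per_sqft_by_year[prev] * 100
    print(f"{prev} -> {curr}: {pct:+.1f}% change in custodial cost per sq ft")
```

A year-over-year drop that coincides with a specific change (here, the assumed 2015 contract renegotiation) is exactly the kind of evidence this tracking is meant to surface.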