Managing Records: Electronic Records: Managing GIS Records: GIS Development Guides
Pilot Studies and Benchmark Tests
- Introduction
- Pilot Study: Proving the Concept
- Executing the Pilot Study
- Evaluating the Pilot Study
- Benchmark Tests: Competitive Evaluation
- Glossary
1. INTRODUCTION
Before committing to a new technology like GIS, a local government should consider testing both the concept and the physical design of such a system. This can be done by performing a pilot study to determine whether GIS can be useful in the daily conduct of business and, if so, by conducting a benchmark test to determine the best hardware and software combination to meet specific needs.
Numerous GIS pilot studies and benchmark tests have been conducted by local governments within the state and across the nation. Decisions on deployment of GIS, however, should not be based solely on the experience of others. Managers and end users respond best to relevant local data and actual applications, and will learn more readily if they have first-hand experience defining and conducting a pilot study or benchmark test in-house.
2. PILOT STUDY: PROVING THE CONCEPT
Planning a Pilot Study
A pilot study provides the opportunity for a local government to evaluate the feasibility of integrating a GIS into the day-to-day functions of its operating units. Implementing GIS is a major undertaking. A pilot study provides limited but useful insight into what it will take to implement GIS within the organization. Proving the concept, measuring performance, and uncovering problems during a pilot study, which runs concurrently with detailed system planning and database planning and design, is far more beneficial than pressing forward with implementation without this knowledge.
To maximize its usefulness, the pilot study must be planned and designed to match the organization's work flow, functions, and goals as described in the GIS needs assessment. The pilot study will be successful only if it has the support and involvement of upper management and staff from the outset. This involvement also provides the opportunity to evaluate the ability of management and staff to learn and adopt new technology.
Objectives of a Pilot Study
A pilot study is a focused test to prove the utility of GIS within a local government. It is not a full GIS implementation, nor is it simply a GIS demonstration; rather, it is a test of how GIS can be deployed within an organization to improve operations. It is the platform for testing preliminary design assumptions, data conversion strategies, and system applications. A properly planned and executed pilot study should:
- create a sample of the database
- test the quality of source documents
- test applications
- test data management and maintenance procedures
- estimate data volumes
- estimate costs for data conversion
- estimate costs for staff training
The pilot study should be limited to a small number of departments or GIS functions and a small geographic area, and it should be application or function driven. Even though data conversion will take a major portion of the pilot study development time, it is the use of the data that is important: what the GIS can do with the data proves the functionality and feasibility of GIS in local government. The Needs Assessment document has already identified applications, the data required, and the sources of data, and a conceptual database design has been developed. The following is a list of procedures for carrying out a pilot study:
- select applications from needs assessment
- determine study area
- review conceptual database design
- determine conversion strategy
- develop physical database design
- procure conversion services and develop conversion work plan
- commence source preparation and scrubbing
- develop acceptance criteria and QC plan
- develop data management and maintenance procedures
- test application
- evaluate and quantify results
- prepare cost estimates
Selecting Applications to Include
Care must be taken to select a variety of applications appropriate to test the functional capabilities of GIS and the entire database structure. A review of the Needs Assessment report should identify applications that meet these requirements. Be sure to include data administration applications along with end-user/operations applications. Data loading, backups, editing, and QC routines have little user appeal, but they represent important functions that the organization will rely on daily to update and maintain the GIS database.
Data to be tested in the pilot study can either be purchased from external sources or converted from in-house maps, photos, drawings, documents and databases. In any event, the data should represent the full mix and range of data expected to be included with the final database. It should include samples of archived or legacy system records and documents if they are planned to be included in the GIS in the future. All potential data types and formats should be considered for the pilot. This is the chance to test the whole process of integrating and managing data, together with the utility of the data in a GIS environment and different conversion and compression methods, before final decisions are made.
Spatial Extent of the Pilot Study
Selection of the study area should address several issues:
- Data density
- Representative sampling
- Seamless vs. sheet-wise conversion or storage
Choose an area (or areas) of interest that represents the range of data density and complexity. Make sure that all data entities to be tested exist in the area of interest. This will provide a representative dataset and allow the extrapolation of data volumes and conversion costs for the range of data over the entire conversion area.
To measure hardware performance, the selected area should be chosen to match the file or map sheet size the end user will normally work with. Be aware that even if the data is currently represented as single map sheets at a variety of scales, the GIS will store the data as a "seamless" dataset.
Preliminary Data Conversion Specifications
A set of data conversion specifications needs to be defined for each of the required data layers in the test datasets. The conversion specs need to address:
- Accuracy
- Reliability
- Coverage
- Convenience
- Completeness
- Condition
- Timeliness
- Readability
- Correctness
- Precedence
- Credibility
- Maintainability
- Validity
- Metadata
The foundation of the GIS is derived from the conversion process, which creates a topologically correct spatial database.
Selecting GIS Hardware and Software
To provide for continuity and to minimize added expense for total system development, select the most likely choice of hardware and software based on the database design specifications, and purchase or borrow the equipment and licenses necessary for the pilot study from the hardware and software vendors.
Selecting a Data Conversion Vendor
Even though this is only a pilot study, it also serves as a test of likely suppliers of hardware, software, and data conversion services. Therefore, a reputable data conversion vendor should be selected to perform the work, and prior users of the vendor's services should be contacted to confirm the vendor's ability to meet expectations. Be open to suggestions from potential conversion vendors as to the most cost-effective methods of converting the data; as long as you receive the data in a correct and usable format that satisfies your database plans, the conversion method used should not be an issue. However, you will get much better results if the vendor has first-hand experience with the chosen GIS software and the data conversion takes place in the same GIS software package. There is always a chance of losing attributes or inheriting coordinate precision errors when converting from one format to another.
Defining Criteria for Evaluating the Pilot Study
The pilot study's performance must be evaluated in measurable terms. By its very name, a pilot study implies an initial investigation, and an investigation implies a set of questions to ask and a set of answers to obtain. For clarity, the questions can be organized to match the major components of GIS, plus others as needed.
Database
- Were adequate source documents available and was their quality sufficient?
- How much effort was involved in "scrubbing" the data before conversion?
- How long did the conversion process take?
- Were there any problems or setbacks?
- Was supplemental data purchased, if so, what was the cost?
- Did the data model work for each layer as defined?
- Was the data adequate (i.e. all data elements populated)?
- What errors were found in the data (closure, connectivity, accuracy, completeness, etc.)?
Applications
- Were the applications written as specified?
- Did the applications fit smoothly in the GIS or was a separate process invoked?
- Are the required functions built into the GIS or will applications need to be developed?
- Is the GIS customizable?
- How responsive and knowledgeable is the software developer's technical support staff?
- Were expectations met?
Management and Maintenance Procedures
- How will the data be updated, managed, and maintained in the future?
- Have all those who will contribute to the updating and maintenance been identified?
- Have data management and administration applications been developed and tested?
- Have data accuracy and security issues been addressed?
- Who will have permission to read, write, and otherwise access data?
- How will using GIS change information flow and work flow in the organization?
Costs
- How large a database will be created?
- What will be the required level of existing staff commitment during the data preparation and GIS construction process?
- What will be the cost for data conversion of in-house documents?
- What will be the cost for obtaining supplemental data from outside sources?
- How will GIS impact or interface with existing hardware and software?
- What new hardware, software and peripheral equipment is required?
- How much training of staff is required?
- Will additional staff with distinct GIS programming and analysis capabilities be required?
3. EXECUTING THE PILOT STUDY
Data Preparation (Scrubbing) and Delivery
Document preparation of source data representing the entire range of data to be included in the database must be completed before the conversion contractor can begin work. Data preparation includes improving the clarity of the data for people outside the organization who are unfamiliar with internal practices. This pre-conversion process is referred to as "scrubbing."
Scrubbing is used to identify and highlight features on maps that will be converted to a digital format. The process provides a unique opportunity to review or research the source and quality of the documents and data being used for conversion.
Scrubbing is generally an internal process, but may also be performed by the conversion vendor. The conversion vendor will need to be trained on how to read your maps or drawings. The first map (or all maps) may need to be marked with highlighter pens and an attached symbol key to define what features need to be collected.
At the same time the maps are marked up, coding sheets are filled out with the attributes of the features to be captured, and a unique ID number is assigned to both the feature and the coding sheet to create a relate key. This key is critical to connecting the attribute records to the correct map feature as defined in the database design.
The best key is a "dumb," unique, sequential number that has no significance. The key should never be intelligent; that is, it should never contain other information, carry meaning, or have the potential of changing. Do not use an address, map sheet number, XY coordinates, or installation date. These values are important and should each have their own field in the database, but they should not be used as the primary key. The reason is simple: if you use a smart key like an SBL number and later have to change the number, you risk losing the connection to all the related tables that key on the SBL number. Make the change and the records no longer match. If the key is unique and has no meaning, however, it will never have to be changed. Street names change, numbers get transposed, and features are discovered to be on the wrong map sheet or at the wrong XY coordinates. If an intelligent key must ever be corrected, a large defensive programming effort has to be in place to guarantee its integrity. Avoid the grief and use a dumb, unique key.
Coding sheets are only required if the attributes of the features are not readily available from the map document. For example, if all the required attributes for a feature are shown as annotations on the map (e.g. the size, material, and slope for a sanitary sewer line), then a coding sheet is unnecessary. If additional research is required to find the installation date, contractor name, flow modeling parameters, or video inspection survey, then a coding sheet needs to be filled out for each feature. Again, it is critical to create and maintain a unique key between the map feature and the attribute data on the coding sheet.
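The surrogate-key rule above can be sketched with a small relational example. The table layouts, field names, and values here are hypothetical; any relational database behaves the same way:

```python
import sqlite3

# Hypothetical tables illustrating the "dumb key" rule: feature_id is a
# meaningless sequential number, while address and map_sheet are ordinary,
# editable attribute fields with their own columns.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE feature (
    feature_id INTEGER PRIMARY KEY,   -- dumb, unique, sequential
    address    TEXT,                  -- meaningful, may change
    map_sheet  TEXT                   -- meaningful, may change
);
CREATE TABLE inspection (
    feature_id INTEGER REFERENCES feature(feature_id),
    inspected  TEXT
);
""")
con.execute("INSERT INTO feature (address, map_sheet) VALUES ('12 Main St', 'MS-41')")
con.execute("INSERT INTO inspection VALUES (1, '1998-06-01')")

# Correcting a transposed house number touches only the attribute field;
# the join on the surrogate key is unaffected.
con.execute("UPDATE feature SET address = '21 Main St' WHERE feature_id = 1")
rows = con.execute("""
    SELECT f.address, i.inspected
    FROM feature f JOIN inspection i ON f.feature_id = i.feature_id
""").fetchall()
```

Had the address itself been the key, the same correction would have orphaned the inspection record.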
Once the data has been prepared for conversion, make copies of everything being sent out and make an inventory of the maps, coding sheets, photos, etc. that will be sent to the vendor. Ask the vendor to perform an inventory check on the receiving end to verify a complete shipment arrived.
Change management is essential. If the manual maps or data will be continually updated in-house during the conversion process, keep careful records about what maps and or features have changed since the maps have been sent out. This is an important process that needs to be fully in-place if the pilot study leads to a full GIS implementation.
When and Where to Set Up the Pilot Study
Expect the pilot study to have an impact on daily work, and choose participants where the pilot will not have a negative impact on the daily workload. Even if the GIS is intended to assist a mission-critical process like E911, conduct the pilot as a parallel effort; do not expect it to replace an existing system. At the same time, try to make the GIS a part of the daily workflow to test its integration potential.
To help ensure the success of the pilot study, choose willing participants to act as the test bed or pilot study group. Make sure they understand the impact the pilot will have on the organization and the level of commitment required from staff members. Use educational seminars to inform employees about GIS technology and the purpose of the pilot study. Communicate very clearly what the objectives of the pilot study will be, what functions and datasets will be tested, and which questions will be investigated. Describe the required feedback and the questionnaires or checklists that will be used. Above all else, communicate to keep staff informed and to control expectations.
Who Should Participate
A team representing a cross-section of managers, supervisors, and operations staff should be assembled for the pilot study. Choose the staff carefully to assure an objective and thoughtful system evaluation. If possible, choose the same people who were involved in the needs assessment process; they will be more aware of GIS technology and may be eager to see the project move forward.
Testing and Evaluation Period
Hold a pilot team kickoff meeting with the conversion, software, and hardware vendors present. Restate the objectives of the pilot study and the responsibilities of each party. Review the Needs Assessment and database design documents and assess training requirements. Define communication protocol guidelines if necessary to keep key players communicating and resolving problems.
Before the data arrives, install the software and/or hardware in the target department. Conduct user training to familiarize employees with the use of the GIS software. If employees are unfamiliar with computers, allow more time for training and familiarization.
Once the data has been converted and delivered, have the conversion vendor or the software vendor load the data on the target machines. Be sure that this step and all preparatory efforts are monitored and treated as a learning process for your staff.
Begin a thorough investigation of the capabilities and limitations of the hardware and software. Keep user- and vendor-defined checklists beside the machines at all times, and have each user log their observations and impressions during each session. Make sure to note any change in performance as a function of time of day or workload, and note whether each user's level of comfort increases with time spent using the software.
Log all calls to the data conversion, software and hardware vendors. Note the knowledge and skill of the call takers, responsiveness and turn-around time from initial call to problem resolution. Some problems may be addressed on the phone, others may take days. If the call cannot be handled immediately, ask the outside technical support person for an estimated time.
Obtaining Feedback From Participants
It is imperative that all individuals involved in the pilot study provide input before, during, and after the pilot study. The best way to guarantee feedback from the participants is to have them help formulate the objectives of the pilot and the questionnaires and checklists. Sample questions were listed earlier in this document; augment these with questions from your own staff. Some questions can be answered with a yes/no checklist, some answers will be a dollar figure, and some will require a scoring system to rate aspects of system performance from satisfactory to poor or unacceptable. Other issues that may affect information flow, traditional procedures, and work tasks will require participants to write essay-style answers or draw sketches of changes they would like to see in the user interface or the map display. All responses should be compiled in such a way that they can be measured and rated numerically.
4. EVALUATING THE PILOT STUDY
What Information Should Be Derived From the Pilot Study
The first question to be addressed is whether the pilot study was a success. Success does not necessarily mean that the process went without a hitch; a successful pilot study can be fraught with problems, and GIS can still be rejected as a technology for the organization. The success of the pilot study should be measured by whether the goals and objectives defined for the pilot were achieved. Most issues listed below were covered in earlier portions of the document, but are summarized again.
Data Specific Issues
Many issues to be assessed in the pilot study are data specific and are related to data quality, volumes and conversion efforts.
Source Document Quality
Most first-time GIS users are so awestruck by seeing their maps on the computer screen or on colorful hard copy plots that they overlook the importance of reviewing the quality and usefulness of the source documents and the utility of the final product. Many original maps are so old and faded that they are unusable as source documents for creating a GIS dataset. Some municipal agencies have scrapped their existing maps and re-surveyed the entire town's street and utility infrastructure. This is not a cheap alternative, but digitizing bad maps is not a good investment.
Quality Control Needs
There is a danger present in any data conversion project (even for a pilot study) that the vendor will perform the conversion and deliver the data to the client without an adequate Quality Control process in place. If the client is new to GIS, they may not be able to determine if all the data is present, if the data is layered correctly or if all attributes are populated.
Because a GIS looks at map features as spatially related, connected or closed features, GIS query and display functions can be used to identify features that are in error. By displaying each map layer one at a time using the attributes of the features, item values that are out of range (blank, zero, or extreme values) will show up graphically on the maps in different colors or symbol patterns. Erroneous values should be reported to the conversion vendor immediately for resolution.
The client may consider using a third party GIS consulting firm to review the quality of the data and verify the map accuracy.
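The out-of-range screening described above can be sketched independently of any particular GIS package. The field names, expected ranges, and records below are invented for illustration:

```python
# Flag attribute records whose values are blank, zero, or outside an
# expected range so they can be symbolized on the map and reported to
# the conversion vendor. Field names and ranges are hypothetical.
EXPECTED = {"diameter_in": (4, 72), "slope_pct": (0.1, 15.0)}

def out_of_range(records):
    """Return (record id, field, value) for every suspect attribute."""
    problems = []
    for rec in records:
        for field, (lo, hi) in EXPECTED.items():
            value = rec.get(field)
            if value is None or value == 0 or not (lo <= value <= hi):
                problems.append((rec["id"], field, value))
    return problems

# Invented sanitary sewer records, two with deliberate errors.
sewers = [
    {"id": 101, "diameter_in": 8,  "slope_pct": 1.2},
    {"id": 102, "diameter_in": 0,  "slope_pct": 0.8},   # blank/zero value
    {"id": 103, "diameter_in": 10, "slope_pct": 40.0},  # extreme value
]
suspects = out_of_range(sewers)
```

In practice the flagged IDs would drive a map display that colors suspect features differently, making the errors visible at a glance.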
Data Availability
Before an attribute field is added to a coding sheet as a target for data capture, be sure the value is readily available and important to the operation of the agency. Many data fields would be nice to have but may not be cost effective. For example, a sidewalk and driveway inventory would be a useful data layer for a community to capture; however, if there are no existing maps showing sidewalk locations, using aerial photos and photogrammetry is a costly way to capture sidewalks and driveways. A cheaper alternative may be to create two single-digit fields in the street centerline attribute table to hold flags indicating the presence or absence of sidewalks on the left or right side of the street. An operator looking at the GIS screen and air photos can assign the values to the flags without a large amount of effort. Based on these values, different line styles or colors can be used to symbolize the presence of sidewalks in a screen display or on hardcopy maps.
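The flag-field alternative can be sketched as follows. The field names and line-style labels are hypothetical; a real GIS would map them to its own symbology:

```python
# Derive a line style for a street centerline segment from two
# single-character sidewalk flags, instead of digitizing sidewalk
# geometry. Field names (sw_left, sw_right) and style names are invented.
def sidewalk_symbol(segment):
    """Pick a line style from the left/right sidewalk flags."""
    left = segment.get("sw_left") == "Y"
    right = segment.get("sw_right") == "Y"
    if left and right:
        return "solid-both"
    if left or right:
        return "solid-one-side"
    return "dashed-none"

segment = {"street": "Elm St", "sw_left": "Y", "sw_right": "N"}
style = sidewalk_symbol(segment)
```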
Pre-conversion Editing
Be sure to track and review the number of staff hours spent and the problems encountered during the pre-conversion scrubbing effort. These steps will undoubtedly be performed again during the full conversion, and now is the time to assess the impact on the organization.
Data Volumes
Data volumes and disk space are important issues to evaluate in the pilot study. The pilot by design covers a small area of interest; use feature density ratios, such as those described under Refined GIS Cost Estimates below, to extrapolate data volumes for the entire GIS implementation effort. Data volume is not only a disk space issue: large datasets are inherently harder to manage and take more computer resources to manipulate, back up, restore, copy, and convert. A tiling scheme (i.e. breaking the data into smaller packets for storage and manipulation) should be investigated in the pilot study as a possible solution for full implementation.
Assessing the Adequacy of the Data Conversion Specifications
Data conversion specifications give the conversion vendor and the client organization a set of guidelines on what layers, features, and attributes should be captured; at what precision and level of accuracy; and in what format the data are to be delivered. Best intentions and reality need to meet in the pilot study to evaluate the expectations and the level of effort (costs) involved with converting the target dataset.
Ask the conversion vendor for feedback on the clarity of the specifications. Do the specs make sense? Some vendors, holding to the adage that the customer is always right, will not question your specifications and will do whatever you ask no matter how inefficient the process. Others will openly suggest alternative approaches and will seek clarifications. Note the kinds of questions they present and be open to changes early in the process.
Evaluation of the Logical Data Model and Applications
Not only should the quality of the data conversion and the GIS software be reviewed in the pilot; just as important, the logical data model needs to be reviewed. The logical data model describes how map features are defined (points, lines, polygons, annotations) and the relationships between these map features and related database tables. Running applications against the data model allows measurement of response times, which are a function of data organization.
The bottom line: does the data model make sense for all the applications being addressed in the pilot, and will it be useful in the full implementation? Ask the conversion and software vendors to explain the organizational structure of the GIS data model, including the advantages, disadvantages, and tradeoffs of the model used in the pilot, and ask whether the same structure would perform comparably in a full implementation. Look carefully for shortcuts or data model changes made just to get a dataset working in the pilot; a model may work very well for a demo on a small dataset but be unwieldy in a large implementation.
GIS Hardware and Software Performance
Test the GIS running under a variety of scenarios, ranging from single to multiple users performing simple to complex tasks. Ask your software vendor to write a simple macro to simulate multiple users running a series of large database queries. Test the performance of query and display applications while data administration functions are running. Finally, ask the key question: were the users able to learn the system and perform useful work?
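The multi-user macro suggested above can be approximated in any scripting language. In this sketch a timed sleep stands in for a real GIS query, and the per-user elapsed times are what the benchmark would record:

```python
import threading
import time

# Simulate several concurrent "users" each issuing a batch of queries,
# recording elapsed time per user. run_query is a placeholder for an
# actual large database query against the GIS.
def run_query():
    time.sleep(0.01)  # stands in for a real query's response time

def simulate_user(user_id, n_queries, results):
    start = time.perf_counter()
    for _ in range(n_queries):
        run_query()
    results[user_id] = time.perf_counter() - start

results = {}
threads = [threading.Thread(target=simulate_user, args=(u, 5, results))
           for u in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# results now maps each simulated user to total elapsed seconds
```

Comparing per-user times as the number of simulated users grows gives a rough picture of how throughput degrades under load.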
Refined GIS Cost Estimates
By requiring the conversion vendor to keep detailed logs of conversion times for each data layer and feature type by map sheet, the client organization can extrapolate from the pilot data conversion to a cost for full conversion. One approach that has worked well in the past is to use parcel density as an indicator of manmade features. For example, if you compute a series of ratios of the number of buildings, light poles, miles of pavement edge, manholes, hydrants, and other features against the number of parcels in the pilot area, you can estimate with reasonable certainty the number of manmade features in the remainder of the GIS implementation area. The Office of Real Property Services has a low-cost ($50 per town) parcel centroid database in a GIS format that can be used as a guide for parcel density. Unfortunately, physical features like streams, ponds, contours, wooded areas, and wetlands do not have a direct correlation to parcels; in fact, there seems to be an inverse relationship between parcel density and the number of physical features. The point is that the pilot study should provide an indication of costs for a full-featured, full-function GIS implementation effort.
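The parcel-density extrapolation above reduces to a simple per-parcel ratio calculation. The counts used here are invented for illustration:

```python
# Extrapolate manmade feature counts from the pilot area to the full
# implementation area using feature-per-parcel ratios. All counts are
# hypothetical examples, not measured values.
pilot_parcels = 500
pilot_counts = {"hydrants": 40, "manholes": 120, "light_poles": 210}
full_area_parcels = 12000

def extrapolate(counts, parcels, target_parcels):
    """Scale each pilot feature count by the ratio of parcel counts."""
    scale = target_parcels / parcels
    return {feature: round(n * scale) for feature, n in counts.items()}

estimate = extrapolate(pilot_counts, pilot_parcels, full_area_parcels)
```

Multiplying the estimated feature counts by the vendor's logged per-feature conversion times then yields a full-conversion cost projection.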
Analyzing User Feedback
Tally the number of positive responses to yes/no questions, compute an average score for user satisfaction, and compile the essay responses for content and tone. Review the compiled results with all team members and management. Interview team members to clarify unclear or strongly worded responses and gain more insight. From the response scorecards and comments, develop an overall score to measure user satisfaction and the completion of goals and objectives.
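The tally described above can be sketched as a small calculation. The response records, the 1-to-5 satisfaction scale, and the equal weighting of the two measures are all illustrative assumptions, not a standard scoring method:

```python
# Tally yes/no answers, average 1-5 satisfaction scores, and blend the
# two into a single 0-100 overall score. Question names and the 50/50
# weighting are hypothetical.
responses = [
    {"met_expectations": True,  "satisfaction": 4},
    {"met_expectations": True,  "satisfaction": 3},
    {"met_expectations": False, "satisfaction": 2},
]

yes_count = sum(r["met_expectations"] for r in responses)
avg_satisfaction = sum(r["satisfaction"] for r in responses) / len(responses)
# Equal weights assumed: half from yes/no results, half from satisfaction.
overall = 50 * (yes_count / len(responses)) + 50 * (avg_satisfaction / 5)
```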
5. BENCHMARK TESTS: COMPETITIVE EVALUATION
The purpose of a benchmark is to evaluate the performance and functionality of different data conversion methods, hardware and software configurations in a controlled environment. Each software package can be compared in the same hardware environment or one software package can be compared across different hardware platforms.
By defining a uniform set of functions to be performed against a standard dataset, key advantages and disadvantages of the different configurations can be compared fairly and objectively.
Planning a Benchmark Test
As with any successful project, a detailed, well-thought-out plan needs to be devised. Note that performing a benchmark takes a large amount of effort by both the local government agency and the vendors taking part; few firms can afford to devote large amounts of staff time and computing resources to competing in benchmark tests for free. Keep that in mind as you design the benchmark, and focus the tests on key issues that can be readily compared. If the benchmark will be extensive, the agency may need to cover some of the associated costs.
Objectives for the Test
A benchmark provides an opportunity to evaluate the claims of advanced technology and high performance presented by the marketing/sales force of competing data conversion, hardware and GIS software vendors.
The objectives of the benchmark should be defined clearly and communicated to all parties involved. Suggested objectives for each of the different types of benchmarks include testing:
Conversion Methods
- Cost effective procedures
- Sound methodology
- Quality control measures
- Compliance with conversion specifications
Hardware
- Computing performance
- Conformance to standards
- Network compatibility and interoperability
- Future growth plans and downward compatibility
Software
- Conformance to standards
- Computing speed / performance
- GIS functionality (standard and advanced)
- Ability to run on your existing hardware
- Ease of use - menu interface, on-line help, map generation, etc.
- Ease of customization for non-standard functions
- Licensing and maintenance costs
This list of objectives is not all-inclusive and should be used only as a guideline or starting point for your organization in designing a benchmark study.
Preparing Ground Rules
Based on the defined objectives, all parties involved should be aware of what will be tested, how they will be judged and what criteria will be used as a measure (i.e. low cost, high performance, good service, quality, accuracy, etc.).
- Tests to be performed should be as fair as possible
- The exact same information and datasets should be given to all vendors
- A reasonable time frame should be provided to perform the work
- No vendor should be given preferential treatment over any other and clarifications of intent should be offered to all
- Tests should be quantitatively measurable
- Hardware tests should use comparably equipped or comparably priced machines
- Software tests should be performed on the same hardware and operating system
Create scoring sheets for each aspect of the test. For subjective tests, like ease of use, have each user rate their satisfaction/dissatisfaction with the results of each phase using a numeric rank-order scheme. This won't eliminate bias but will allow impressions and opinions to be compared. For objective tests, like machine performance, record the clock speed, disk space requirements, number of button clicks, error messages, response time, etc. for each test conducted.
Preparing the Test Specifications (Preliminary Request for Proposals or RFP)
The test specifications need to outline the type of test to be conducted (conversion, hardware or software); objectives of the test; detailed description of the test; measures for compliance; and a time frame for completion.
Selecting the Participants and Location
In order to conduct a benchmark, you need knowledgeable participants (both internal and external). The internal participants should be knowledgeable regarding the topic to be tested (data conversion, hardware or software).
Selecting external participants is more involved. Situations range from not knowing which vendors to invite to needing to limit the number of vendors. The smaller the number of participants, the easier the final selection process will be for the local government agency.
The Request for Qualifications (RFQ) process can be used to filter or pre-qualify potential participants. GIS is a specialized field and not every business involved with computers is qualified.
Several factors should be considered when selecting vendors for a benchmark test:
- Are they knowledgeable about local government agency operations?
- Are they a well-known company?
- Are they technically qualified?
- Are they experienced, with a successful track record?
- Are they financially sound, insured, or bonded?
- Will they still be around 5 years down the road?
- Are they local, or do they have a local representative?
- Would their previous clients hire them again?
If the RFQ and/or the RFP are written clearly and succinctly, the process will filter the participants and only those companies that specialize in the subject in question will respond.
The benchmark can occur either at the client's site or the vendor's offices. Some tests like data conversion are best conducted at the vendor site to minimize relocating staff and equipment for a test. Hardware and software benchmarks are commonly conducted at both the vendor and client site. The initial data loading, customization and testing is performed at the vendor site. Once the operations are stable, the client is invited to view the results at the vendor site, or the system is transported to the client site.
Preparing the Data
For a data conversion benchmark, provide each vendor with a set of marked-up (scrubbed) maps, documents, and coding sheets as described in the pilot study section above. If possible, provide the data conversion vendor with an example dataset from the pilot study which shows the appropriate data layering, tolerances, and attributes to be captured. If a dataset is not available, provide clear specifications for how the data should appear when complete. Specify what data format (*.dxf, *.e00, *.mif, tar, zip, etc.) and what type and size of media (1/4", 8mm, or 4mm tapes) you want the data delivered in.
For a hardware or software benchmark, provide a sample dataset which contains all possible layers for inclusion in the GIS. The data could be purchased, converted during the pilot study or could be the results from a data conversion benchmark noted above. Provide sufficient documentation with the data to describe the use of the data, the organizational structure and contents.
Scheduling The Benchmark Test
Once the benchmark has been defined and agreed to by the participants, set a time for the testing to occur. Schedule a start date and a duration. Unless you specifically want to use company responsiveness as part of the test (i.e., how fast they can respond to a problem), don't require an immediate start date or an extremely short time frame. There is no need to cause undue panic and stress; you want a good test.
Transmitting Application Specifications And Data To Participants
Before transmitting maps, documents or data to any vendor, make an inventory and backup copies of all items. Either specify to the vendors that the data will be provided in a single data format on a specific media, or make arrangements to provide the data in a format they can read. Be sure to test the readability of the tape or disk on a target machine in your office before sending the data out. Once the data has been verified as complete and readable, make two copies of the tapes or diskettes, one to send and one to keep as a recoverable backup for documentation of the delivery. Provide detailed instructions as to the contents of the tapes or disks and how to extract the data. List phone numbers of responsible persons should problems arise with delivery or data extraction. Ask the vendor to perform an inventory at the receiving end to acknowledge receipt of the data or documents.
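The pre-shipment inventory described above can be scripted. The following is a minimal sketch in Python (the directory layout, file names, and manifest format are hypothetical, not part of the guide); it records the name, size, and checksum of every file so the vendor can acknowledge receipt against the same list:

```python
import hashlib
import os


def build_manifest(data_dir, manifest_path):
    """Record the relative path, size, and MD5 checksum of every file
    to be shipped, so the receiving vendor can verify the delivery
    against the same list."""
    with open(manifest_path, "w") as out:
        for root, _, files in os.walk(data_dir):
            for name in sorted(files):
                path = os.path.join(root, name)
                with open(path, "rb") as f:
                    digest = hashlib.md5(f.read()).hexdigest()
                rel = os.path.relpath(path, data_dir)
                out.write(f"{rel}\t{os.path.getsize(path)}\t{digest}\n")
```

Run the same script on the backup copy kept in-house; if the two manifests match, the shipped copy and the recoverable backup are known to be identical.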
On-Site Arrangements
If the tests are to be conducted at your site, make sure you have the authorization and backing of management and all personnel to be involved. Provide plenty of advance notice and time to set up. If you are conducting hardware tests, you have to decide whether more than one vendor's machines will be present at the same time for comparative testing. With both machines set up in the same room, you can conduct the exact same tests in "real time" and visually compare the results, but this will require more setup space and logistical leeway in the schedule to accommodate multiple vendors. Make sure you have a suitable environment for equipment with adequate power, air conditioning, and security. Also make sure you have all required utility software in place to read and write compressed files from tape, as well as virus detection software.
If you are performing software tests, make sure you have two or more machines with the exact same hardware and operating system configurations. If you can't have multiple machines, be sure to back up and restore the current operating system files before testing each software package to ensure a fair test of disk space requirements, resource usage, and functionality. Always use the same datasets for each test.
Identifying Deficiencies In Specifications
Even if the tests were well thought out and carefully followed, you will probably wish you had performed additional tests during the benchmark. If shortcomings are discovered early on and they do not involve major changes in direction, additional tests can be incorporated. Be sure to notify the local management, staff, and vendor participants of the change in objectives.
Defining Benchmark Criteria
Data Conversion Issues
A standard set of tests needs to be performed to evaluate the results of a data conversion benchmark. Overlaying checkplots with the source documents on a light table is a straightforward but time-consuming way to compare the conversion results. Suggestions made in the Pilot Study section of this document outline methods for using GIS query and display functions to determine whether all the data is present, whether it is layered correctly, and whether attribute values are within range. Displaying map features by attributes will highlight errors or items out of range in different colors or symbol patterns.
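The attribute range check described above can be expressed in a few lines outside any particular GIS package. The sketch below is illustrative only; the parcel layer, attribute names, and acceptable ranges are invented for the example:

```python
def find_out_of_range(features, rules):
    """Return (feature, attribute, value) triples whose attribute values
    are missing or fall outside the expected range, so they can be
    flagged in a distinct color or symbol on a checkplot.

    features: list of dicts of attribute values
    rules: {attribute: (minimum, maximum)} of acceptable values
    """
    flagged = []
    for feat in features:
        for attr, (lo, hi) in rules.items():
            value = feat.get(attr)
            if value is None or not (lo <= value <= hi):
                flagged.append((feat, attr, value))
    return flagged


# Hypothetical parcel layer: lot sizes should fall between 0.1 and 500 acres
parcels = [
    {"parcel_id": 1, "acres": 2.5},
    {"parcel_id": 2, "acres": -1.0},
]
flagged = find_out_of_range(parcels, {"acres": (0.1, 500.0)})
```

Here only parcel 2 would be flagged; running the same rules against each vendor's deliverable gives a directly comparable error count.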
GIS Software Performance
Software tests can be classified into two groups: capabilities and performance. Capabilities testing determines whether the software can perform a specific task (e.g., convert DXF files, register image data, access external databases, read AutoCAD drawings). Performance deals with how well or how fast the software performs the selected task. How fast can be measured with a stopwatch; how well is open to interpretation.
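The "stopwatch" measurement can be scripted so every package is timed the same way. This is a minimal sketch (the repeat count and the task being timed are arbitrary stand-ins for whatever operation is under test):

```python
import time


def time_task(task, repeats=5):
    """Run a benchmark task several times and report the best and average
    wall-clock time in seconds -- the scripted equivalent of a stopwatch.
    Repeating smooths out one-off delays such as disk caching."""
    runs = []
    for _ in range(repeats):
        start = time.perf_counter()
        task()
        runs.append(time.perf_counter() - start)
    return {"best": min(runs), "average": sum(runs) / len(runs)}


# Stand-in workload; in practice this would invoke the GIS operation under test
result = time_task(lambda: sum(range(100000)))
```

Recording both the best and the average run gives a fairer comparison than a single timing, since the first run often pays one-time startup costs.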
The operating system on the machines in question will be a big factor in how GIS software performs. GIS software written to run on a 32-bit operating system will not perform as well in a 16-bit environment without workarounds. Likewise, a 16-bit application will run faster on a 32-bit machine, but will not run as well as 32-bit software on a 32-bit operating system like UNIX, Windows 95, or Windows NT.
Hardware Performance
The goal is to find the fastest, least expensive hardware that fits your budget. Take advantage of computer magazine reviews of hardware. They conduct standard benchmark tests involving word processing, spreadsheet, and graphics packages. The test results won't be GIS specific, but they will show the overall performance of a given computer. Oddly enough, two computers with seemingly identical hardware specifications (clock speed, memory, and disk space) can perform very differently based on internal wiring, graphics acceleration, and chip configurations.
Evaluating Benchmark Results
If the questions were formulated clearly and the results were recorded honestly, evaluating the results of the benchmark should be a process of simple addition. Essay responses and comments will have to be followed up with further tests to clarify any problems or differences encountered.
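The "simple addition" can be made explicit with a weighted scoring sheet. The vendors, criteria, scores, and weights below are all invented for illustration; essay responses are deliberately left out of the tally, as the text recommends:

```python
def tally_scores(responses, weights):
    """Sum weighted numeric scores per vendor. Criteria without a weight
    (e.g., essay responses awaiting follow-up) are simply skipped.

    responses: {vendor: {criterion: numeric score}}
    weights: {criterion: weight}
    """
    totals = {}
    for vendor, scores in responses.items():
        totals[vendor] = sum(
            weights[crit] * score
            for crit, score in scores.items()
            if crit in weights
        )
    return totals


# Hypothetical scoring sheet: two vendors, three weighted criteria
responses = {
    "Vendor A": {"accuracy": 8, "speed": 6, "support": 9},
    "Vendor B": {"accuracy": 7, "speed": 9, "support": 5},
}
weights = {"accuracy": 3, "speed": 2, "support": 1}
totals = tally_scores(responses, weights)
```

Agreeing on the weights before the benchmark is run keeps the final tally from being adjusted to favor a preferred vendor after the fact.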