Lesson 20
Collecting and Analyzing Diagnostic Information
Dimensions to Consider in Diagnosis:
In addition to having descriptive, analytic, and diagnostic theories, the consultant must consider a number of other dimensions when planning diagnosis. A description of seven such dimensions follows:
1. Timing of the diagnostic activities is a significant dimension. For example, it is one thing to collect and analyze data and then to develop a strategy for how to use it, but quite another to gather data about the perceived usefulness and timeliness of doing a survey in the first place. Much time and many resources can be wasted if organizational participants are not prepared to work with the data.
2. Extent of participation is a key aspect of diagnosis. Who, in a preliminary way, decided that diagnosis should take place? Who decided how it should be done? Which people were systematically involved in supplying data, and further in analyzing and describing the dynamics revealed by the data? One person? Two people? The top team? The top team plus others? One or more people in conjunction with a consultant? All of the members of the system or subsystem? One of the underlying assumptions is the efficacy of participative problem identification and diagnosis in contrast to unilateral problem identification and diagnosis.
3. The dimension of confidentiality, or individual-anonymous versus group surfacing of data, has important facets. In the early stages of an OD effort, when trust between group members may be low and their feedback skills inadequate, the situation may call for individual interviews, with responses kept anonymous and reported to the group only in terms of themes. As trust is earned and grows, people can become more open about surfacing attitudes, feelings, and perceptions of organizational dynamics in group settings.
4. The degree to which there was pre-selection of variables versus emergent selection of the variables to be considered is another important dimension. For survey feedback, one widely used questionnaire taps some 19 dimensions under three broad categories: leadership, organizational climate, and satisfaction. Another instrument, the Managerial Grid, focuses on two dimensions: concern for people and concern for production. Some OD consultants use interviews built around only two or three questions, such as: What things are going well in the organization? What problems do you see?
5. The extent to which data gathering and analysis are isolated events, in contrast to being part of a long-range strategy, is also important. One usual assumption in OD efforts is that diagnostic activities should be part of an overall plan: diagnostic activities lead to action programs that in turn call for further diagnostic activities. This is the action research model. Diagnostic activities that are not part of any such plan, but are prompted instead by someone's whim to know "what they are thinking," may produce resentment and resistance and can seriously hinder attempts to get valid data from system members.
6. The nature of the target population in both preliminary and later systematic data gathering and analysis is also a key dimension. The size and nature of the target group can affect the acceptability of the diagnostic process, what kinds of interdependencies can be examined, and what kinds of issues can be worked on successfully. The data-providing group can be different from the data-analyzing group, but in OD the suppliers of the information usually work with their own data in intact work teams.
7. Finally, the type of technique used obviously has a number of important ramifications. By type we mean questionnaire-versus-interview techniques, individual-versus-group surfacing of data, or other categories of techniques that can be differentiated in major ways. As one example of the importance of technique selection, an interview can be used for trust building as well as for collecting data; a face-to-face conversation is a better vehicle for building a relationship than sending someone a questionnaire. Concerns can be expressed and responded to, questions can be answered, and assurances can be provided as to how the data will be used, and so on. As another example, giving diagnostic assignments to subgroups in a workshop setting can be a powerful diagnostic technique, but the way these groups are constituted (for example, heterogeneous versus homogeneous in terms of rank, position, or aggressiveness-resistance) can be crucial to the amount and candor of the data generated.
Collecting and Analyzing Diagnostic Information:
Organization development is vitally dependent on organization diagnosis: the process of collecting information that will be shared with the client in jointly assessing how the organization is functioning and determining the best change intervention. The quality of the information gathered, therefore, is a critical part of the OD process.
Data collection involves gathering information on specific organizational features, such as the inputs, design components, and outputs discussed earlier. The process begins by establishing an effective relationship between the OD practitioner and those from whom data will be collected, and then choosing data-collection techniques. Four methods can be used to collect data: questionnaires, interviews, observations, and unobtrusive measures. Data analysis organizes and examines the information to make clear the underlying causes of an organizational problem or to identify areas for future development. The overall process of data collection, analysis, and feedback is shown in Figure 26.
Figure 26: The Data Collection and Feedback Cycle
The Diagnostic Relationship:
In most cases of planned change, OD practitioners play an active role in gathering data from organization members for diagnostic purposes. For example, they might interview members of a work team about causes of conflict among members, or they might survey employees at a large industrial plant about factors contributing to poor product quality. Before collecting diagnostic information, practitioners need to establish a relationship with those who will provide and subsequently use it. Because the nature of that relationship affects the quality and usefulness of the data collected, it is vital that OD practitioners clarify for organization members who they are, why the data are being collected, what the data gathering will involve, and how the data will be used. That information can help allay people's natural fears that the data might be used against them and gain members' participation and support, which are essential to developing successful interventions.
Establishing the diagnostic relationship between the consultant and relevant organization members is similar to forming a contract. It is meant to clarify expectations and to specify the conditions of the relationship. In those cases where members have been directly involved in the entering and contracting process described earlier, the diagnostic contract will typically be part of the initial contracting step. In situations where data will be collected from members who have not been directly involved in entering and contracting, however, OD practitioners will need to establish a diagnostic contract as a prelude to diagnosis. The answers to the following questions provide the substance of the diagnostic contract:
1. Who am I? The answer to this question introduces the OD practitioner to the organization, particularly to those members who do not know the consultant and yet will be asked to provide diagnostic data.
2. Why am I here, and what am I doing? These answers are aimed at defining the goals of the diagnosis and data-gathering activities. The consultant needs to present the objectives of the action research process and to describe how the diagnostic activities fit into the overall developmental strategy.
3. Who do I work for? This answer clarifies who has hired the consultant, whether it is a manager, a group of managers, or a group of employees and managers. One way to build trust and support for the diagnosis is to have those people directly involved in establishing the diagnostic contract. Thus, for example, if the consultant works for a joint labor-management committee, representatives from both sides of that group could help the consultant build the proper relationship with those from whom data will be gathered.
4. What do I want from you, and why? Here, the consultant needs to specify how much time and effort people will need to give to provide valid data and subsequently to work with these data in solving problems. Because some people may not want to participate in the diagnosis, it is important to specify that such involvement is voluntary.
5. How will I protect your confidentiality? This answer addresses member concerns about who will see their responses and in what form. It is especially critical when employees are asked to provide information about their attitudes or perceptions. OD practitioners can either ensure confidentiality or state that full participation in the change process requires open information sharing. In the first case, employees are frequently concerned about privacy and the possibility of being punished for their responses. To alleviate concern and to increase the likelihood of obtaining honest responses, the consultant may need to assure employees of the confidentiality of their information, perhaps through explicit guarantees of response anonymity. In the second case, full involvement of the participants in their own diagnosis may be a vital ingredient of the change process. If sensitive issues arise, assurances of confidentiality can co-opt the OD practitioner and thwart meaningful diagnosis: the consultant becomes bound to keep confidential the very issues that are most critical for the group or organization to understand. OD practitioners must think carefully about how they want to handle confidentiality issues.
6. Who will have access to the data? Respondents typically want to know whether they will have access to their data and who else in the organization will have similar access. The OD practitioner needs to clarify access issues and, in most cases, should agree to provide respondents with their own results. Indeed, the collaborative nature of diagnosis means that organization members will work with their own data to discover causes of problems and to devise relevant interventions.
7. What's in it for you? This answer is aimed at providing organization members with a clear delineation of the benefits they can expect from the diagnosis. This usually entails describing the feedback process and how they can use the data to improve the organization.
8. Can I be trusted? The diagnostic relationship ultimately rests on the trust established between the consultant and those providing the data. An open and honest exchange of information depends on such trust, and the practitioner should provide ample time and face-to-face contact during the contracting process to build it. This requires the consultant to listen actively and to discuss openly all questions raised by participants.
Careful attention to establishing the diagnostic relationship
helps to promote the three goals of data
collection. The first and most immediate objective is to obtain
valid information about organizational
functioning. Building a data-collection contract can ensure that
organization members provide honest,
reliable, and complete information.
Data collection also can rally energy for constructive
organizational change. A good diagnostic relationship
helps organization members start thinking about issues that
concern them, and it creates expectations that
change is possible. When members trust the consultant, they are
likely to participate in the diagnostic
process and to generate energy and commitment for organizational
change.
Finally, data collection helps to develop the collaborative
relationship necessary for effecting organizational
change. The diagnostic stage of action research is probably the
first time that most organization members
meet the OD practitioner, and it can be the basis for building a
longer-term relationship. The data-collection
contract and subsequent data-gathering and feedback activities
provide members with
opportunities for seeing the consultant in action and for
knowing her or him personally. If the consultant
can show employees that she or he is trustworthy, is willing to
work with them, and is able to help improve
the organization, then the data-collection process will
contribute to the longer-term collaborative
relationship so necessary for carrying out organizational
changes.
The Data-Collection Process:
The process of collecting data is an important and significant
step in an OD program. During this stage, the
practitioner and the client attempt to determine the specific
problem requiring solution. After the
practitioner has intervened and has begun developing a
relationship, the next step is acquiring data and
information about the client system.
This task begins with the initial meeting and continues
throughout the OD program. The practitioner is, in
effect, gathering data and deciding which data are relevant
whenever he or she meets with the client,
observes, or asks questions. Of all the basic OD techniques, perhaps none is as fundamental as data collection. The practitioner must be certain of the facts before
proceeding with an action program. The
probability that an OD program will be successful is increased
if it is based upon accurate and in-depth
knowledge of the client system.
Information quality is a critical factor in any successful
organization. Developing an innovative culture and
finding new ways to meet customer needs are strongly influenced
by the way information is gathered and
processed. Organization development is a data-based change
activity. The data collected are used by the
members who provide the data, and often lead to insights into
ways of improving effectiveness. The data-collection process itself involves an investigation, a body of data, and some form of processing information. For our purposes, the word data, which is derived from the Latin verb dare, meaning "to give," is most appropriately applied to unstructured, unformed facts. It is an aggregation of all signs, signals, clues, facts, statistics, opinions, assumptions, and speculations, including items that are accurate and inaccurate, relevant and irrelevant. The word information is derived from the Latin verb informare, meaning "to give form to," and is used here to mean data that have form and structure. A common problem in organizations is that they are data-rich but information-poor: lots of data, but little or no information.
An OD program based upon a systematic and explicit investigation of the client system has a much higher probability of success, because a careful data-collection phase initiates the organization's problem-solving process and provides a foundation for the following stages. This section discusses the steps involved in the data-collection process.
The Definition of Objectives:
The first and most obvious step in data collection is defining the objectives of the change program. A clear
understanding of these broad goals is necessary to determine
what information is relevant. Unless the
purpose of data collection is clearly defined, it becomes
difficult to select methods and standards. The OD
practitioner must first obtain enough information to allow a
preliminary diagnosis and then decide what
further information is required to verify the problem
conditions. Usually, some preliminary data gathering
is needed simply to clarify the problem conditions before
further large-scale data collection is undertaken.
This is usually accomplished by investigating possible problem
areas and ideas about what an ideal
organization might be like in a series of interviews with key
members of the organization. These
conversations enable the organization and the practitioner to
understand the way things are, as opposed to
the way members would like them to be.
Most practitioners emphasize the importance of collecting data
as a significant step in the OD process.
First, data gathering provides the basis for the organization to
begin looking at its own processes, focusing
upon how it does things and how this affects performance.
Second, data collection often begins a process
of self-examination or assessment by members and work teams in
the organization, leading to improved
problem-solving capabilities.
The Selection of Key Factors:
The second step in data collection is to identify the central variables involved in the situation (such as turnover, breakdowns in communication, and isolated management). The practitioner and the client decide which factors are important and what additional information is necessary for a systematic diagnosis of the client system's problems. The traditional approach was to select factors along narrow issues, such as pay and immediate supervisors; more recently, the trend has been to gauge the organization's progress and status more broadly. Broader issues include selecting factors that determine the culture and values of the organization.
Organizations normally generate a considerable amount of "hard" data internally, including production reports, budgets, turnover ratios, sales per square foot, sales or profit per employee, and so forth, which may be useful as indicators of problems. These internal data can be compared with competitors' data and industry averages. The practitioner may find, however, that it is necessary to increase the range or depth of data beyond what is readily available. The practitioner may wish to gain additional insights into other dimensions of the organizational system, particularly those dealing with the quality of the transactions or relationships between individuals or groups. This additional data gathering may examine the following dimensions:
• What is the degree of dependence between operating teams, departments, or units?
• What is the quantity and quality of the exchange of information and communication between units?
• What is the degree to which the vision, mission, and goals of the organization are shared and understood by members?
• What are the norms, attitudes, and motivations of organization members?
• What are the effects of the distribution of power and status within the system?
In this step, the practitioner and client determine which
factors are important and which
factors can and should be investigated.
The Selection of a Data-Gathering Method:
The third step in data collection is selecting a method of gathering data. There are many different types of data and many different methods of tapping data sources. There is no one best way to gather data; the selection of a method depends on the nature of the problem. Whatever method is adopted, data should be acquired in a systematic manner, thus allowing quantitative or qualitative comparison between elements of the system. The task in this step is to identify certain characteristics that may be measured to help in the achievement of the OD program objective and then to select an appropriate method to gather the required data. Some major data-collection methods follow.
Methods for Collecting Data:
The four major techniques for gathering diagnostic data are questionnaires, interviews, observations, and unobtrusive measures. Table 3 briefly compares the methods and
lists their major advantages and
problems. No single method can fully measure the kinds of
variables important to OD because each has
certain strengths and weaknesses. For example, perceptual
measures, such as questionnaires and surveys,
are open to self-report biases, such as respondents’ tendency to
give socially desirable answers rather than
honest opinions. Observations, on the other hand, are
susceptible to observer biases, such as seeing what
one wants to see rather than what is really there. Because of
the biases inherent in any data-collection
method, we recommend that more than one method be used when
collecting diagnostic data. If data from
the different methods are compared and found to be consistent,
it is likely that the variables are being
measured validly. For example, questionnaire measures of job
discretion could be supplemented with
observations of the number and kinds of decisions employees are
making. If the two kinds of data support
one another, job discretion is probably being accurately
assessed. If the two kinds of data conflict, then the
validity of the measures should be examined further, perhaps by using a third method, such as interviews.
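To make the triangulation idea above concrete, here is a minimal sketch in Python (the employees, ratings, and observation counts are all invented for illustration; the lesson itself prescribes no particular tool). It compares questionnaire ratings of job discretion with the number of decisions observed for the same employees and checks whether the two methods agree:

```python
# Minimal sketch of cross-method validation ("triangulation").
# Hypothetical data: for each of six employees, a 1-5 questionnaire
# rating of job discretion and the number of independent decisions
# observed during one work week.
from math import sqrt

questionnaire = [2, 4, 3, 5, 1, 4]   # self-reported job discretion
observed      = [3, 9, 6, 11, 2, 8]  # decisions counted by the observer

def pearson(x, y):
    """Plain Pearson correlation; no external libraries required."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(questionnaire, observed)
print(f"correlation between methods: r = {r:.2f}")
# A high positive r suggests the two methods measure the same thing;
# a low or negative r signals that validity should be examined further.
```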
Table 3: A Comparison of Different Methods of Data Collection

Questionnaires
Major advantages:
• Responses can be quantified and easily summarized
• Easy to use with large samples
• Relatively inexpensive
• Can obtain a large volume of data
Major potential problems:
• Non-empathy
• Predetermined questions may miss important issues
• Over-interpretation of data
• Response bias

Interviews
Major advantages:
• Adaptive; allows data collection on a range of possible subjects
• Source of "rich" data
• Empathic
• Process of interviewing can build rapport
Major potential problems:
• Expense
• Bias in interviewer responses
• Coding and interpretation difficulties
• Self-report bias

Observations
Major advantages:
• Collect data on behavior, rather than reports of behavior
• Real time, not retrospective
• Adaptive
Major potential problems:
• Coding and interpretation difficulties
• Sampling inconsistencies
• Observer bias and questionable reliability
• Expense

Unobtrusive measures
Major advantages:
• Non-reactive; no response bias
• High face validity
• Easily quantified
Major potential problems:
• Access and retrieval difficulties
• Validity concerns
• Coding and interpretation difficulties
Questionnaires:
One of the most efficient ways to collect data is through
questionnaires. Because they typically contain
fixed-response queries about various features of an
organization, these paper-and-pencil measures can be
administered to large numbers of people simultaneously. Also,
they can be analyzed quickly, especially with
the use of computers, thus permitting quantitative comparison
and evaluation. As a result, data can easily
be fed back to employees. Numerous basic resource books on
survey methodology and questionnaire
development are available.
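As an illustration of how fixed-response questionnaire data lend themselves to quick computer summarization, the following minimal sketch (Python; the dimension names and ratings are invented and do not come from any standardized instrument mentioned here) computes the mean and spread of responses for each dimension, the kind of summary that can then be fed back to employees:

```python
# Minimal sketch of summarizing fixed-response questionnaire data.
# Hypothetical responses: each row is one employee's 1-5 ratings on
# three illustrative dimensions (leadership, climate, satisfaction).
from statistics import mean, stdev

responses = [
    {"leadership": 4, "climate": 3, "satisfaction": 5},
    {"leadership": 2, "climate": 2, "satisfaction": 3},
    {"leadership": 5, "climate": 4, "satisfaction": 4},
    {"leadership": 3, "climate": 3, "satisfaction": 2},
]

for dimension in ("leadership", "climate", "satisfaction"):
    scores = [r[dimension] for r in responses]
    print(f"{dimension:12s} mean={mean(scores):.2f} sd={stdev(scores):.2f}")
```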
Questionnaires can vary in scope, some measuring selected
aspects of organizations and others assessing
more comprehensive organizational characteristics. They also can
vary in the extent to which they are either
standardized or tailored to a specific organization.
Standardized instruments generally are based on an
explicit model of organization, group, or individual
effectiveness and contain a predetermined set of
questions that have been developed and refined over time.
Several research organizations have been highly instrumental in
developing and refining surveys. The
Institute for Social Research at the University of Michigan and the Center for Effective Organizations at the University of Southern California are two prominent examples. Two of the Institute's most popular
measures of organizational dimensions are the Survey of
Organizations and the Michigan Organizational
Assessment Questionnaire. Few other instruments are supported by
such substantial reliability and validity
data. Other examples of packaged instruments include Weisbord’s
Organizational Diagnostic
Questionnaire, Dyer’s Team Development Survey, and Hackman and
Oldham’s Job Diagnostic Survey. In
fact, so many questionnaires are available that rarely would an
organization have to create a totally new one.
However, because every organization has unique problems and
special jargon for referring to them, almost
any standardized instrument will need to have
organization-specific additions, modifications, or omissions.
Customized questionnaires, on the other hand, are tailored to
the needs of a particular client. Typically, they
include questions composed by consultants or organization
members, receive limited use, and do not
undergo longer-term development. They can be combined with
standardized instruments to provide valid
and reliable data focused toward the particular issues facing an
organization.
Questionnaires, however, have a number of drawbacks that need
to be taken into account in choosing
whether to employ them for data collection. First, responses are
limited to the questions asked in the
instrument. They provide little opportunity to probe for
additional data or to ask for points of clarification. Second, questionnaires tend to be impersonal, and employees may
not be willing to provide honest answers.
Third, questionnaires often elicit response biases, such as the
tendency to answer questions in a socially
acceptable manner. This makes it difficult to draw valid
conclusions from employees’ self-reports.
Interviews:
A study of 245 OD practitioners found that interviewing is the
most widely used data-gathering technique
in OD programs. Interviews are more direct, personal, and
flexible than surveys and are very well suited for
studies of interaction and behavior. Two advantages in
particular set interviewing apart from other
techniques. First, interviews are flexible and can be used in
many different situations. For example, they can
be used to determine motives, values, and attitudes. Second,
interviewing is the only technique that
provides two-way communication. This permits the interviewer to
learn more about the problems,
challenges, and limitations of the organization. Interviewing
usually begins with the initial intervention and
is best administered in a systematic manner by a trained
interviewer. Data-gathering interviews usually last
at least one hour; the purpose is to get the interviewees to
talk freely about things that are important to
them and to share these perceptions in an honest and
straightforward manner. In the author’s experience,
people really want to talk about things that they feel are
important. If the OD practitioner asks appropriate
questions, interviewing can yield important results.
The advantage of the interview method is that it provides data
that are virtually unobtainable through other
methods. Subjective data, such as norms, attitudes, and values,
which are largely inaccessible through
observation, may be readily inferred from effective interviews.
The disadvantages of the interview are the
amount of time involved, the training and skill required of the
interviewer, the biases and resistances of the
respondent and the difficulty of ensuring comparability of data
across respondents.
The interview itself may take on several different formats. It can be directed or non-directed. In a directed interview, certain kinds of data are desired, and therefore specific questions are asked. The questions are usually formulated in advance to ensure uniformity of responses. The questions themselves may be open-ended or closed. Open-ended questions allow the respondent to be free and unconstrained in answering, such as "How would you describe the work atmosphere of this organization?" The responses may be very enlightening, but may also be difficult to record and quantify. Closed questions, which can be answered by a yes, a no, or some other brief response, are easily recorded and are readily quantifiable.
In a non-directed interview, the direction is chosen by the respondent, with little guidance or
direction by the interviewer. If questions are used in a
non-directed interview, open-ended questions will be
more appropriate than closed questions. A non-directed interview
could begin with the interviewer saying,
“Tell me about your job here.” This could be followed by “You
seem to be excited about your work.” The
data from such an interview can be very detailed and
significant, but difficult to analyze because the
interview is unstructured.
Interviews may be highly structured, resembling questionnaires,
or highly unstructured, starting with
general questions that allow the respondent to lead the way.
Structured interviews typically derive from a
conceptual model of organization functioning; the model guides
the types of questions that are asked. For
example, a structured interview based on the organization-level
design components would ask managers
specific questions about organization structure, measurement
systems, human resources systems, and
organization culture.
Unstructured interviews are more general and include broad
questions about organizational functioning,
such as:
• What are the major goals or objectives of the organization or department?
• How does the organization currently perform with respect to these purposes?
• What are the strengths and weaknesses of the organization or department?
• What barriers stand in the way of good performance?
Although interviewing typically involves one-to-one interaction
between an OD practitioner and an
employee, it can be carried out in a group context. Group
interviews save time and allow people to build on
others’ responses. A major drawback, however, is that group
settings may inhibit some people from
responding freely.
A popular type of group interview is the focus group or sensing
meeting. These are unstructured meetings
conducted by a manager or a consultant. A small group of ten to
fifteen employees is selected, representing a cross section of functional areas and hierarchical levels, or a homogeneous grouping, such as minorities or
engineers. Group discussion is frequently started by asking
general questions about organizational features
and functioning, an intervention’s progress, or current
performance. Group members are then encouraged
to discuss their answers more fully. Consequently, focus groups
and sensing meetings are an economical
way to obtain interview data and are especially effective in
understanding particular issues in greater depth.
The richness and validity of the information gathered will
depend on the extent to which the manager or
consultant develops a trust relationship with the group and
listens to member opinions.
Another popular unstructured group interview involves assessing
the current state of an intact work group.
The manager or consultant generally directs a question to the
group, calling its attention to some part of
group functioning. For example, group members may be asked how
they feel the group is progressing on
its stated task. The group might respond and then come up with
its own series of questions about barriers
to task performance. This unstructured interview is a fast,
simple way to collect data about group behavior.
It allows members to discuss issues of immediate concern and to
engage actively in the questioning and
answering process. This technique is limited, however, to
relatively small groups and to settings where there
is trust among employees and managers and a commitment to
assessing group processes.
Interviews are an effective method for collecting data in OD.
They are adaptive, allowing the interviewer to
modify questions and to probe emergent issues during the
interview process. They also permit the
interviewer to develop an empathetic relationship with
employees, frequently resulting in frank disclosure
of pertinent information.
A major drawback of interviews is the amount of time required to
conduct and analyze them. Interviews
can consume a great deal of time, especially if interviewers
take full advantage of the opportunity to hear
respondents out and change their questions accordingly. Personal
biases also can distort the data. Like
questionnaires, interviews are subject to the self-report biases
of respondents and, perhaps more important,
to the biases of the interviewer. For example, the nature of the
questions and the interactions between the
interviewer and the respondent may discourage or encourage
certain kinds of responses. These problems
suggest that interviewing takes considerable skill to gather
valid data. Interviewers must be able to
understand their own biases, to listen and establish empathy
with respondents, and to change questions to
pursue issues that develop during the course of the interview.
Observations:
One of the more direct ways of collecting data is simply to
observe organizational behaviors in their
functional settings. The OD practitioner may do this by walking
casually through a work area and looking
around or by simply counting the occurrences of specific kinds
of behavior (for example, the number of
times a phone call is answered after three rings in a service
department). Observation can range from
complete participant observation, in which the OD practitioner
becomes a member of the group under
study, to more detached observation, in which the observer is
clearly not part of the group or situation
itself and may use film, videotape, and other methods to record
behaviors.
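To illustrate the simple counting approach described above, here is a minimal sketch (Python; the call log, the codes, and the three-ring threshold are hypothetical) that codes and tallies one specific kind of observed behavior:

```python
# Minimal sketch of tallying coded observations.
# Hypothetical log: each entry records how many rings elapsed before
# a phone call was answered in a service department.
from collections import Counter

rings_before_answer = [1, 4, 2, 5, 3, 2, 6, 1, 4, 3]

# Code each call as "prompt" (three rings or fewer) or "slow".
codes = ["prompt" if r <= 3 else "slow" for r in rings_before_answer]
tally = Counter(codes)

total = len(codes)
for code, count in tally.items():
    print(f"{code:6s}: {count} of {total} calls ({100 * count / total:.0f}%)")
```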
Observations have a number of advantages. They are free of the
biases inherent in self-report data. They
put the practitioner directly in touch with the behaviors in
question, without having to rely on others’
perceptions. Observations also involve real-time data,
describing behavior occurring in the present rather
than the past. This avoids the distortions that invariably arise
when people are asked to recollect their
behaviors. Finally, observations are adaptive in that the
consultant can modify what he or she chooses to
observe, depending on the circumstances.
Among the problems with observations are difficulties
interpreting the meaning underlying the
observations. Practitioners may need to devise a coding scheme
to make sense out of observations, and this
can be expensive, take time, and introduce biases into the data.
Because the observer is the data-collection
instrument, personal bias and subjectivity can distort the data
unless the observer is trained and skilled in
knowing what to look for; how, where, and when to observe; and
how to record data systematically.
Another problem concerns sampling: observers not only must
decide which people to observe; they also
must choose the time periods, territory, and events in which to
make those observations. Failure to attend
to these sampling issues can result in highly biased samples of
observational data.
When used correctly, observations provide insightful data about
organization and group functioning,
intervention success, and performance. For example, observations
are particularly helpful in diagnosing the
interpersonal relations of members of work groups. As discussed
earlier, interpersonal relationships are a
key component of work groups; observing member interactions in a
group setting can provide direct
information about the nature of those relationships.
Unobtrusive Measures:
Unobtrusive data are not collected directly from respondents but
from secondary sources, such as company
records and archives. These data are generally available in
organizations and include records of absenteeism
or tardiness; grievances; quantity and quality of production or
service; financial performance; meeting
minutes; and correspondence with key customers, suppliers, or
governmental agencies.
Unobtrusive measures are especially helpful in diagnosing the
organization, group, and individual outputs discussed earlier. At the organization level, for example, market
share and return on investment usually can be
obtained from company reports. Similarly, organizations
typically measure the quantity and quality of the
outputs of work groups and individual employees. Unobtrusive
measures also can help to diagnose
organization-level design components: structures, work systems,
control systems, and human resources
systems. A company’s organization chart, for example, can
provide useful information about organization
structure. Information about control systems usually can be
obtained by examining the firm’s management
information system, operating procedures, and accounting
practices. Data about human resources systems
often are included in a company’s personnel manual.
Unobtrusive measures provide a relatively objective view of
organizational functioning. They are free from
respondent and consultant biases and are perceived as being
“real” by many organization members.
Moreover, unobtrusive measures tend to be quantified and
reported at periodic intervals, permitting
statistical analysis of behaviors occurring over time. Examining
monthly absenteeism rates, for example,
might reveal trends in employee withdrawal behavior.
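As a small illustration of that kind of time-based analysis, the following sketch (Python; the monthly absenteeism figures are invented) fits a simple least-squares trend line to monthly absenteeism rates to see whether withdrawal behavior appears to be rising:

```python
# Minimal sketch of examining monthly absenteeism for a trend.
# Hypothetical data: absenteeism rate (%) for twelve consecutive months.
rates = [3.1, 3.0, 3.4, 3.6, 3.5, 3.9, 4.2, 4.0, 4.4, 4.7, 4.6, 5.0]

months = list(range(1, len(rates) + 1))
n = len(rates)
mean_m = sum(months) / n
mean_r = sum(rates) / n

# Ordinary least-squares slope: change in absenteeism per month.
slope = sum((m - mean_m) * (r - mean_r) for m, r in zip(months, rates)) \
        / sum((m - mean_m) ** 2 for m in months)

print(f"average rate: {mean_r:.2f}%")
print(f"trend: {slope:+.2f} percentage points per month")
# A clearly positive slope would suggest a rising pattern of
# employee withdrawal worth exploring with other diagnostic data.
```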
The major problems with unobtrusive measures occur in collecting
such information and drawing valid
conclusions from it. Company records may not include data in a
form that is usable by the consultant. If,
for example, individual performance data are needed, the
consultant may find that many firms only record
production information at the group or departmental level.
Unobtrusive data also may have their own built-in
biases. Changes in accounting procedures and in methods of
recording data are common in
organizations, and such changes can affect company records
independently of what is actually happening in
the organization. For example, observed changes in productivity
over time might be caused by
modifications in methods of recording production rather than by
actual changes in organizational
functioning.
Despite these drawbacks, unobtrusive data serve as a valuable
adjunct to other diagnostic measures, such as
interviews and questionnaires. For example, if questionnaires
reveal that employees in a department are
dissatisfied with their jobs, company records might show whether
that discontent is manifested in
heightened withdrawal behaviors, in lowered quality work, or in
similar counterproductive behaviors.