As many readers will know, the Army has a new process for selecting officers for battalion command. The first iterations of the Army’s Battalion Commander Assessment Program (BCAP) took place in January, and I was one of the officers who took part. My performance there will likely determine if I will continue to progress as a leader of soldiers in the United States Army.
Each of us who participated in BCAP has provided the Army with significantly more data with which to deem us either ready or not ready for battalion command. The stakes are high for the officers under consideration; many of them feel that selection for battalion command is the defining achievement of a career as an Army officer. With the institution of the BCAP, the rules for reaching that goal seem to have changed suddenly, frustrating and angering many of those invested in the current system. Many of my peers find themselves questioning whether their efforts over the last 16–18 years still mean anything.
After taking part in the program and reflecting on the experience since its completion, I have reached two conclusions. First, I believe the BCAP will make the process of selecting battalion commanders more fair. And second, the Army needs to take steps like this to stay relevant.
An Even Playing Field
The BCAP did something truly unique for an organization as large as the Army: it made a supreme effort to use the same yardstick for everyone. Maj. Gen. JP McGee, the director of the Army Talent Management Task Force (ATMTF), which was responsible for establishing the BCAP process, briefed us upon arrival at Fort Knox that the BCAP cadre would administer every assessment professionally and consistently for each of the more than 760 candidates, split into eighteen cohorts. It was evident to me during each BCAP event that the ATMTF went to great lengths to ensure minimal variance of measurement in order to give the Army as fair a comparison of us to one another as possible.
Take, for example, our height and weight measurements—one of the first assessments to ensure we met Army standards for body composition. The assessment was done in the same room of the same building with the same examiners on the same scale at the same time of day for everyone. This was followed the next morning by the Army Physical Fitness Test (APFT), which was administered indoors to ensure uniform conditions for each candidate. Push-ups and sit-ups were graded by the same graders, using the same strict Army standard.
If we are honest with ourselves, something as simple as enforcing a single standard on push-ups at a unit is challenging for a variety of reasons. Often the people grading are the people who also stand to benefit from increases in unit physical performance stats. Maybe it’s a friend or a superior taking the test. How willing were we to say, “Sir/Ma’am, that last rep didn’t count, your elbows didn’t lock out completely”? The way this APFT was administered created a basis for fair comparison between candidates. Frankly, I finished with one of the worst PT scores I have had in the last decade—but I was confident everyone else had a similar experience. Twenty-one and three-quarters laps on an elevated track around an indoor gym made for a miserable two-mile run, but everyone ran the same course with the same HVAC system causing the same dry cough.
The APFT was followed by a battery of psychometric tests and a writing assessment to measure overall cognitive abilities and to scan for any potential issues regarding a candidate’s psychological health. I come from a background in Army Special Forces and have undergone similar testing before. Seeing it here, though, felt important. The Army should have the results of such testing for every single potential battalion commander, given the impact such leaders will have within their respective units. This data enables an apples-to-apples comparison of officers’ raw cognitive ability, instead of relying primarily on rater and senior-rater assessments of intelligence, which have no formal reporting mechanism. The current field-grade officer evaluation report (OER) doesn’t even have a specific spot for raters or senior raters to convey their assessments of a rated officer’s intelligence; for now such an assessment can only be inferred from a senior rater’s comments.
For the past seventy years, Army promotion and command selection boards have predominantly relied on OERs, particularly the senior rater’s comments, to assess the quality and potential of officers. While that has worked well for the most part, that process has its shortcomings. The most significant issue is how much the quality of the senior rater’s writing affects how a promotion or command selection board will grade an officer relative to his or her peers. Whether the senior rater writes well is independent of the rated officer’s true quality, performance, or potential. Additionally, senior raters can only comment on officers under their command, making it very difficult to assess how officers from different units, rated by different commanders, compare to one another. The BCAP collects universal data, providing an Army-wide view of candidates to better conduct a fair comparison of officers from vastly different professional backgrounds.
The final event of the BCAP has perhaps received the most attention due to its novel use of a “blind” panel. Initially, this format concerned me because it seemed to eliminate the evaluation of criteria I thought were important for battalion command—presence, professional appearance, and use of nonverbal communication. However, we were told that the blind interview had two specific objectives: (1) to determine if an officer was ready for command; and (2) to assess the verbal communication skills of the officer. Given those limited objectives, the blind setup made sense.
The blind panel’s first objective of assessing each officer’s readiness for command was a pass-fail determination; the interview did not have to grade an officer with enough fidelity to move them up or down the overall rank order of assessed officers. It only had to determine if a candidate exhibited characteristics that convinced a majority of the panel members that he or she was not ready to command a battalion. I was not privy to the exact criteria, but I suspected some indicators—toxic leadership traits, for example—might have met that threshold.
The second objective—assessing verbal skills—didn’t require the panel to see the officer. Panel members just needed to hear what that officer was saying and assess how effectively he or she communicated according to a rubric shared with candidates before our arrival at Fort Knox.
Overall, the quality that most clearly characterized all of the events and the way the BCAP was conducted was consistency. Data collected from these events will make the process of selecting battalion commanders more fair because it allows for more even comparison between officers with a wide variety of professional backgrounds. All the data were collected the same way, for each and every officer under consideration.
A Change the Army Needs
The Army needs to continue honestly and realistically assessing its many programs and systems in order to stay competitive in the contemporary operating environment—and make bold changes when change is required. That’s exactly what the BCAP represents. From our in-brief, we were told that the Army is seeking to transform itself from a force meant to dominate in the Industrial Age into one agile enough to win in the Information Age. The BCAP is one of the first cornerstones laid in a broader modernization strategy to accomplish that mission.
Modernizing our Army for the Information Age is no small task. Efforts to upgrade cyber and networking capabilities are the types of initiatives that seem to capture the most attention, and while those are important, so are things like talent management. The Army must incorporate Information-Age capabilities such as data-driven decision making into the talent-management process. We need updated methods and tools to collect data about ourselves and each other, and then to further parse and analyze that data. It might sound cliché to make this comparison, but this transformation is like Moneyball for the Army. Professional baseball teams realized that some of the information driving acquisition decisions, like a player’s height or how fluid his pitching form was, wasn’t directly tied to making the team win more. However, there was player data available with a direct tie to achieving more wins: on-base percentage. A player who got on base more than another would generate more opportunities to score, which translated into more runs and more victories. The takeaway here for the Army is that with a little trust, the right data can help us create a competitive advantage.
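To make the Moneyball comparison concrete, the sketch below shows the kind of check the analogy implies: given a table of player attributes and an outcome, measure which attribute actually tracks the outcome. This is a minimal illustration in Python with made-up numbers; neither the attributes nor the values come from any real roster or from BCAP data.

```python
# Illustrative only: toy, synthetic numbers standing in for the Moneyball idea.
# Neither the attributes nor the values come from any real roster or from BCAP.

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical player records: (height in inches, on-base percentage, runs created).
players = [
    (72, 0.310, 55),
    (74, 0.395, 80),
    (75, 0.290, 48),
    (73, 0.330, 72),
    (71, 0.340, 65),
]
heights = [p[0] for p in players]
obp = [p[1] for p in players]
runs = [p[2] for p in players]

print(f"height vs. runs: {pearson(heights, runs):+.2f}")  # near zero for this toy set
print(f"OBP    vs. runs: {pearson(obp, runs):+.2f}")      # strong and positive
```

The numbers are stand-ins; the habit the analogy argues for is the same regardless: test whether a measurement actually tracks the outcome you care about before letting it drive decisions.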
Accepting that data-driven decisions have a role in our modern Army is a step in the right direction. The BCAP, undertaken on the direct authority of the chief of staff of the Army, means more than just selecting the right battalion commanders. It is a demonstration of trust in a selection methodology that carves out a space for data analysis. The human element still plays the biggest role in deciding if and where an officer will command a battalion. Assessment of past performance through the lens provided by OERs makes up the greatest share of the formula that calculates a candidate’s overall position on the battalion command order-of-merit list. The important thing is that the Army is bringing in data analysis where previously there was none.
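The Army has not disclosed how the components are actually weighted, so the following is only a sketch of the structure described above: OER-based assessment of past performance carrying the largest share, with BCAP measurements contributing the rest. Every weight, field name, and score here is a hypothetical placeholder.

```python
# Hypothetical weighting only: the actual formula and weights behind the
# order-of-merit list are not public in this article. The point is the shape:
# OER-based assessment dominates, with BCAP data contributing a smaller,
# explicit share.

WEIGHTS = {
    "oer_history": 0.60,  # assumed: past performance as seen through OERs
    "cognitive":   0.15,  # assumed: psychometric and writing results
    "fitness":     0.10,  # assumed: APFT and height/weight
    "interview":   0.15,  # assumed: blind-panel communication score
}

def order_of_merit_score(candidate: dict) -> float:
    """Combine normalized component scores (each 0.0 to 1.0) into one number."""
    return sum(WEIGHTS[key] * candidate[key] for key in WEIGHTS)

# A made-up candidate with already-normalized component scores.
example = {"oer_history": 0.82, "cognitive": 0.70, "fitness": 0.65, "interview": 0.75}
print(f"order-of-merit score: {order_of_merit_score(example):.3f}")
```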
For some, this seems like an obvious step for the Army to take, but that is a pretty radical concept for an Army culture that has historically sought to empower leaders at the lowest level and honor the sanctity of a commander’s personal judgment. With the BCAP, assuming the Army leadership stands by the results, we are taking some weight from what commanders, in their capacity as senior raters, have said about their folks and reallocating it to the raw data collected during the BCAP.
However, the modest inclusion of data analysis in the selection and placement of battalion commanders is not my main concern. What struck me several days after I returned from the BCAP was the burning question underlying the whole effort for the wider Army: How do we know we are collecting the right data? Do soldiers of higher cognitive ability actually make better commanders? Can we say the same of APFT scores? The BCAP collected comprehensive data on a single year group of aspiring battalion commanders, and symbolically that means a lot for the reasons I gave above, but we have to understand what it is—a single data pull at one point in time from a particular year group of officers. The Army doesn’t have a way to take these results and compare them against historically successful commanders. Other than collective assumptions about desirable characteristics in a commander, we don’t know for sure which characteristics have a causal relationship with mission success, or to what degree. Going back to the Moneyball example, it’s not as clear cut as “they get on base.” We are still just analyzing the BCAP data and rank ordering candidates using the Army’s assumptions about what makes a good commander.
The Army had to start somewhere, though, and collecting data had to begin at some point. Sure, many of my peers wished it hadn’t started with us, but I think all of us know that we’re part of the Army team, and someone had to execute the task of getting assessed. I just hope that this effort continues for the long term and that longitudinal studies are produced as a result. Ideally, when the time comes for my year group to take battalion command, the Army would also collect data on a wide range of aspects that define “success” in command of a unit, such as soldier retention, combat efficiency during rotations at the combat training centers, or how well a commander’s subordinates end up scoring when they’re eventually eligible for the BCAP. That unit-performance data, in combination with what was gathered during the BCAP, would help to objectively identify what constitutes a good commander.
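One way to picture the longitudinal study described above is a simple join of BCAP-era assessment scores with later command-outcome measures, followed by a check of which assessed attributes actually track success. The sketch below assumes hypothetical record layouts and synthetic values; none of the field names or numbers reflect real BCAP or unit data.

```python
# A sketch of the longitudinal analysis described above. All record layouts,
# field names, and values are synthetic; none reflect real BCAP or unit data.
from statistics import mean

# BCAP-era assessment scores, keyed by an anonymized candidate ID.
bcap_scores = {
    "c1": {"cognitive": 0.81, "apft": 0.70, "interview": 0.75},
    "c2": {"cognitive": 0.62, "apft": 0.90, "interview": 0.60},
    "c3": {"cognitive": 0.74, "apft": 0.55, "interview": 0.88},
}

# Later command-outcome measures for the same officers (e.g., retention or
# combat-training-center results), normalized to 0.0-1.0 for comparability.
outcomes = {"c1": 0.78, "c2": 0.55, "c3": 0.84}

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ids = sorted(bcap_scores)
for attribute in ("cognitive", "apft", "interview"):
    assessed = [bcap_scores[i][attribute] for i in ids]
    observed = [outcomes[i] for i in ids]
    print(f"{attribute:>9} vs. outcome: r = {pearson(assessed, observed):+.2f}")
```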
After some reflection, I realized my peers and I shouldn’t overthink how to maximize our personal performance at the BCAP. Our mission there wasn’t to get selected for a battalion command; it was to help the Army find the best possible battalion commanders. Taking a hard look at myself, I do strongly desire to return to command a Special Forces battalion, but I know most of the other officers under consideration, and they are all superb. The Special Forces Regiment will keep “rolling along,” even if I do not come back to command. So, putting my pride and personal desire for a tactical battalion command aside, the real question is: How best can I serve the Army in the years ahead? For now, participating in the BCAP is the best way to help the Army find the answer.
Lt. Col. Vincent Enriquez currently serves as a Military Aide to the Vice President. He was originally commissioned as an Army Engineer after graduation from West Point in 2003. He later completed the Special Forces Qualification Course, earning the Green Beret, and served in 1st Special Forces Group (Airborne) and 1st Special Forces Command (Airborne). He was previously a Wayne A. Downing Scholar.
The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Image credit: Pvt. Matthew Marcellus, US Army
I would like to thank Lt. Col. Vincent Enriquez for a very in-depth look into BCAP. I am finding the Army’s transformation into forward thinking so refreshing, instead of an OER tunnel-vision process. Question of interest: Did you get selected for a battalion commander position? Thank you.
Well written and cogent… The general import of the commentary is encouraging in that the Army is examining a more relevant and effective approach to selecting one of its most critical corps of leaders, both in their operational leadership contribution when activated and in their potential benefit to the service as they develop toward critical Army senior leadership needs. But having experienced earlier iterations of such processes introduced over several decades, I always retain reservations about possible negatives along with the positives. Most system developers tend to be conceptual people. Most system appliers and practitioners are not. This is an immutable fact of our reality and translates to some dysfunction over time. For example, a previous version of such a selection process introduced in the late 1980s, euphemistically referred to as the "Youth Leadership" initiative, arguably had some downsides over the following years in its effect on the leaders who emerged from it. As always, some positives mixed with negatives resulted… but on the whole, what was the long-term effect on, say, the level of toxic leadership, or on the number of highly effective leaders who built truly effective, mission-capable teams at every level of command? Throughout the development of the system being introduced, I hope the developers involved keep open minds and their eye on the "real" ball: competent, nontoxic leaders of integrity who understand the sacred duty they are being considered for. The last thing a battalion needs is a narcissist.
The one aspect that I did not see evaluated was what this officer’s battalion command sergeant major and other senior NCOs thought about his leadership style and effectiveness.
Senior NCOs are key to an officer’s development, and I trust they also have the Army’s best interest at heart.
Vincent, thanks for writing this. I am curious about the overall weighting of BCAP to OERs.
As part of the psychological/leader profile, a 360 assessment was done on each candidate. Prior to the BCAP event, organizers solicited feedback from subordinates and peers on each candidate. The candidate did not have any role in selecting who would provide commentary, so there was no “survey packing.” Each candidate also answered a similar survey about themselves. This allowed the evaluation team to assess the emotional intelligence and self-perception of candidates. The range of surveys solicited allowed assessors to eliminate significant outliers from any individuals with an ax to grind. The 360 assessment was aggregated into the psychological profile of the candidate. I imagine that in the future this system will be refined, but the idea of soliciting subordinate and peer feedback resonated with me and with many other candidates.
Thanks for a great rundown and a mature look at this important process. One thing you address, though not directly, is the consistent commentary I hear on social media and in the last Pre-command Course: the fear that senior rater evals clearly don’t mean anything anymore. Basically, future Brigade CDRs not understanding that the ONLY reason you and your fellow officers were even at BCAP was because leaders (BN, but more so BDE, Commanders) chose you to be there through their evaluations. Congrats on the selection and good luck wherever you command!
Just to make your point that Army standards as simple as the push-up on the APFT are subjective: your example of “Sir/Ma’am, that last rep didn’t count, your elbows didn’t lock out completely” for the push-up does not sound correct. Is the standard to "lock out your elbows" or to "raise your entire body until your arms are fully extended"? If you don't know the difference, your point about how subjective Army standards are just got more complicated. Yes, I agree there should be an "even playing field," and that starts with knowing what the standards are and are not. As leaders, we need to become expert enough to understand the standard and the spirit of that requirement to properly train and discipline those we lead, as well as to assist those above us in making sound decisions for the good of the Army. Unfortunately, life is not fair and never will be… But I applaud the effort.