Author: admin

  • Blade Design

    Flow deflection in turbomachines is established by stator and rotor blades with prescribed geometry that includes inlet and exit camber angles, stagger angle, camberline, and thickness distribution. The blade geometry is adjusted to the stage velocity diagram, which is designed for the specific turbine or compressor flow application. Simple blade design methods are available in the open literature (see References). More sophisticated, high-efficiency blade designs developed by engine manufacturers are generally not available to the public. An earlier theoretical approach by Joukowsky [1] uses the method of conformal transformation to obtain cambered profiles that can generate lift force. The mathematical limitations of the conformal transformation do not allow modifications of a cambered profile to produce the pressure distribution required by a turbine or compressor blade design. In the following, a simple method is presented that is equally applicable to designing compressor and turbine blades. The method is based on (a) constructing the blade camberline and (b) superimposing a predefined base profile on the camberline. With regard to generating a base profile, the conformal transformation can be used to produce useful profiles for superposition purposes. A brief description of the Joukowsky transformation explains the methodology for generating symmetric and asymmetric (cambered) profiles.
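
    As a concrete illustration of steps (a) and (b), the Python sketch below constructs a simple camberline from prescribed inlet and exit camber angles and superimposes a thickness distribution normal to it. Both the cubic camberline and the NACA-type thickness polynomial are illustrative assumptions standing in for the base profile discussed in the text, not the specific method of the reference.

    ```python
    import numpy as np

    # (a) Camberline from prescribed inlet/exit camber angles, here a cubic
    #     polynomial whose end slopes match the two angles (an assumed form).
    # (b) Base profile superimposed normal to the camberline, here a NACA
    #     four-digit half-thickness distribution (also an assumed stand-in).

    def cubic_camberline(beta1_deg, beta2_deg, chord=1.0, n=101):
        """y_c(x) with y_c(0) = y_c(chord) = 0, slope tan(beta1) at the leading
        edge and -tan(beta2) at the trailing edge (angles from the chord)."""
        t1 = np.tan(np.radians(beta1_deg))
        t2 = np.tan(np.radians(beta2_deg))
        # y_c = a*x^3 + b*x^2 + t1*x ; solve for a, b from the exit conditions.
        A = np.array([[chord**3, chord**2],
                      [3.0 * chord**2, 2.0 * chord]])
        rhs = np.array([-t1 * chord, -t2 - t1])
        a, b = np.linalg.solve(A, rhs)
        x = np.linspace(0.0, chord, n)
        yc = a * x**3 + b * x**2 + t1 * x
        slope = 3.0 * a * x**2 + 2.0 * b * x + t1
        return x, yc, slope

    def naca_half_thickness(x, chord=1.0, t_max=0.10):
        """NACA four-digit half-thickness distribution, max thickness t_max*chord."""
        xi = x / chord
        return 5.0 * t_max * chord * (0.2969 * np.sqrt(xi) - 0.1260 * xi
                                      - 0.3516 * xi**2 + 0.2843 * xi**3
                                      - 0.1015 * xi**4)

    def blade_profile(beta1_deg, beta2_deg, chord=1.0, t_max=0.10, n=101):
        """Suction- and pressure-side coordinates: thickness applied normal to camber."""
        x, yc, slope = cubic_camberline(beta1_deg, beta2_deg, chord, n)
        yt = naca_half_thickness(x, chord, t_max)
        theta = np.arctan(slope)
        suction = np.column_stack((x - yt * np.sin(theta), yc + yt * np.cos(theta)))
        pressure = np.column_stack((x + yt * np.sin(theta), yc - yt * np.cos(theta)))
        return suction, pressure

    if __name__ == "__main__":
        ss, ps = blade_profile(beta1_deg=35.0, beta2_deg=55.0)  # hypothetical angles
        print(ss[:3], ps[:3])
    ```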

    The transformation uses complex analysis, which is a powerful tool for dealing with potential theory in general and potential flow in particular. It is found in almost every fluid mechanics textbook that has a chapter on potential flow. While they all share the same underlying mathematics, the style of presenting the subject to engineering students differs. A very compact and precise description of this subject matter is found in the excellent textbook by Spurk [2].

    Conformal Transformation, Basics

    Before treating the Joukowsky transformation, a brief description of the method is given below. We consider the mapping of a circular cylinder from the z-plane onto the ζ-plane, Figure 9.1. Using a mapping function, the region outside the cylinder in the z-plane is mapped onto the region outside another cylinder in the ζ-plane. Let P and Q be the corresponding points in the z- and ζ-planes, respectively. The potential at the point P is

    Figure 9.1: Conformal transformation of a circular cylinder onto an airfoil.

    The point Q has the same potential, and we obtain it by insertion of the mapping function

    Taking the first derivative of Equation (9.2) with respect to ζ, we obtain the complex conjugate velocity ¯Vζ in the ζ plane from

    Considering z to be a parameter, we calculate the value of the potential at the point z. Using the transformation function ζ = f(z), we determine the value of ζ which corresponds to z. At this point ζ, the potential then has the same value as at the point z. To determine the velocity in the ζ plane, we form

    After introducing Equation (9.3) into Equation (9.4) and considering ¯Vz(z) = dF/dz, Equation (9.4) is rearranged as

    Equation (9.5) expresses the relationship between the velocity in the ζ-plane and the velocity in the z-plane. Thus, to compute the velocity at a point in the ζ-plane, we divide the velocity at the corresponding point in the z-plane by dζ/dz. The derivative dF/dζ exists at all points where dζ/dz ≠ 0. At singular points with dζ/dz = 0, the complex conjugate velocity in the ζ-plane, ¯Vζ(ζ) = dF/dζ, becomes infinite unless the velocity at the corresponding point in the z-plane is zero.
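
    The numbered equations referenced above do not reproduce here; the following is a hedged reconstruction of the relations they describe, written in the notation of the surrounding text (potential F, mapping ζ = f(z), complex conjugate velocities ¯Vz and ¯Vζ). The exact forms and numbering in the source may differ.

    ```latex
    % Potential carried over by the mapping, and the resulting velocity relation:
    F(\zeta) = F\bigl(z(\zeta)\bigr), \qquad \zeta = f(z)
    \bar{V}_\zeta(\zeta) \;=\; \frac{dF}{d\zeta}
      \;=\; \frac{dF}{dz}\,\frac{dz}{d\zeta}
      \;=\; \frac{\bar{V}_z(z)}{d\zeta/dz}
    ```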

    Joukowsky Transformation

    The conformal transformation method introduced by Joukowsky allows mapping an unknown flow past a cylindrical airfoil onto a known flow past a circular cylinder. Using the method of conformal transformation, we can obtain the direct solution of the flow past a cylinder of arbitrary cross section. Although numerical methods for solving the direct problem have now superseded the method of conformal mapping, it has still retained its fundamental importance. In what follows, we shall examine several flow cases using the Joukowsky transformation function:
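
    As a brief illustration of this mapping, the Python sketch below applies the Joukowsky function ζ = z + a²/z to a circle in the z-plane. A circle whose center is shifted off the origin but which still passes through the critical point z = a maps onto a cambered, airfoil-like contour; the particular center and radius chosen here are arbitrary illustrative values.

    ```python
    import numpy as np

    # Joukowsky map zeta = z + a**2 / z applied to a circle in the z-plane.
    # Shifting the circle's center to the left of the origin produces thickness,
    # shifting it upward produces camber; the values below are illustrative only.

    def joukowsky(z, a=1.0):
        """Joukowsky transformation zeta = z + a^2 / z."""
        return z + a**2 / z

    def mapped_profile(center=-0.10 + 0.10j, a=1.0, n=400):
        """Image of a circle that passes through the critical point z = a."""
        radius = abs(a - center)              # circle through z = a -> sharp trailing edge
        theta = np.linspace(0.0, 2.0 * np.pi, n)
        circle = center + radius * np.exp(1j * theta)
        return joukowsky(circle, a)

    if __name__ == "__main__":
        zeta = mapped_profile()
        print("chordwise extent:", zeta.real.min(), "to", zeta.real.max())
    ```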

  • Ultra High Efficiency Gas Turbine With Stator Internal Combustion

    The new technology, the Ultra High Efficiency Gas Turbine With Stator Internal Combustion (UHEGT), deals with a new concept for the development of power generation and aircraft gas turbine engines in which the combustion process takes place within the turbine stator rows, leading to a distributed combustion. The UHEGT-concept allows eliminating the combustion chamber, resulting in high thermal efficiencies that cannot be achieved by any gas turbine engine of conventional design. The concept of the UHEGT was developed by Schobeiri and is described in [2] and [3]. A detailed study in [3] shows that the UHEGT-concept drastically improves the thermal efficiency of gas turbines by 5% to 7% above the current highest efficiency of 40.5% set by the GT24/26 at full load. To demonstrate the innovative claim of the UHEGT-concept, a study was conducted comparing three conceptually different power generation gas turbine engines: GT-9, a conventional gas turbine (single shaft, single combustion chamber); a gas turbine with sequential combustion (concept realized by ABB in the GT24/26); and a UHEGT. The evolution of the gas turbine process that represents the efficiency improvement is shown in Figure 1.15.

    Figure 1.15: T-s diagrams of three different gas turbines.

    Starting with the base load process, Figure 1.15 (a), the first improvement is shown in Figure 1.15 (b), where the red hatched areas add to the T-s area, leading to a substantial increase in thermal efficiency. The stator internal combustion represented by the UHEGT-concept, shown in Figure 1.15 (c), provides a further increase of the T-s area and thus of the thermal efficiency. In Figure 1.15 (c) the stator inlet temperature is kept constant. However, as shown in Figure 1.15 (d), the stator inlet temperature might be less than the maximum desired one within the first two stages. A quantitative calculation of each process is presented in Figure 1.16.

    Figure 1.16: Comparison of (a) thermal efficiencies and (b) specific work for Baseline GT, GT24/26 and UHEGT.

    It compares the thermal efficiency and specific work of the baseline GT, the GT24/26, and a UHEGT with three and four stator-internal combustion stages, UHEGT-3S and UHEGT-4S, respectively. The maximum temperature (TIT) is the same for all cycles, equal to 1200 °C. As shown in Figure 1.16, a thermal efficiency above 45% is calculated for the UHEGT-3S. This represents an increase of at least 5% above the efficiency of the current most advanced gas turbine engine, which is close to 40%. Increasing the number of stator-internal combustion stages to four, the curve labeled UHEGT-4S, can raise the efficiency above 47%, an enormous increase compared to any existing gas turbine engine. It should be noted that the UHEGT-concept substantially improves the thermal efficiency of gas turbines when the pressure ratio is optimized for the turbine inlet temperature. This gives the UHEGT a wide range of applications, from small to large size engines.

    Figure 1.17: Technology change: Substantial increase of thermal efficiency at a moderate level.

    Figure 1.17 summarizes the impact of technology change on thermal efficiency. It is remarkable that the efficiency increase of the UHEGT of 7.5% above the best existing GTs has been achieved without substantially raising the TIT.

  • Why Everything in the Universe Turns More Complex

    In 1950 the Italian physicist Enrico Fermi was discussing the possibility of intelligent alien life with his colleagues. If alien civilizations exist, he said, some should surely have had enough time to expand throughout the cosmos. So where are they?

    Many answers to Fermi’s “paradox” have been proposed: Maybe alien civilizations burn out or destroy themselves before they can become interstellar wanderers. But perhaps the simplest answer is that such civilizations don’t appear in the first place: Intelligent life is extremely unlikely, and we pose the question only because we are the supremely rare exception.

    A new proposal by an interdisciplinary team of researchers challenges that bleak conclusion. They have proposed nothing less than a new law of nature, according to which the complexity of entities in the universe increases over time with an inexorability comparable to the second law of thermodynamics — the law that dictates an inevitable rise in entropy, a measure of disorder. If they’re right, complex and intelligent life should be widespread.

    In this new view, biological evolution appears not as a unique process that gave rise to a qualitatively distinct form of matter — living organisms. Instead, evolution is a special (and perhaps inevitable) case of a more general principle that governs the universe. According to this principle, entities are selected because they are richer in a kind of information that enables them to perform some kind of function.

    This hypothesis, formulated by the mineralogist Robert Hazen and the astrobiologist Michael Wong of the Carnegie Institution in Washington, D.C., along with a team of others, has provoked intense debate. Some researchers have welcomed the idea as part of a grand narrative about fundamental laws of nature. They argue that the basic laws of physics are not “complete” in the sense of supplying all we need to comprehend natural phenomena; rather, evolution — biological or otherwise — introduces functions and novelties that could not even in principle be predicted from physics alone. “I’m so glad they’ve done what they’ve done,” said Stuart Kauffman, an emeritus complexity theorist at the University of Pennsylvania. “They’ve made these questions legitimate.”

    Others argue that extending evolutionary ideas about function to non-living systems is an overreach. The quantitative value that measures information in this new approach is not only relative — it changes depending on context — but also impossible to calculate. For this and other reasons, critics have charged that the new theory cannot be tested, and therefore is of little use.

    The work taps into an expanding debate about how biological evolution fits within the normal framework of science. The theory of Darwinian evolution by natural selection helps us to understand how living things have changed in the past. But unlike most scientific theories, it can’t predict much about what is to come. Might embedding it within a meta-law of increasing complexity let us glimpse what the future holds?

    Making Meaning

    The story begins in 2003, when the biologist Jack Szostak published a short article in Nature proposing the concept of functional information. Szostak — who six years later would get a Nobel Prize for unrelated work — wanted to quantify the amount of information or complexity that biological molecules like proteins or DNA strands embody. Classical information theory, developed by the telecommunications researcher Claude Shannon in the 1940s and later elaborated by the Russian mathematician Andrey Kolmogorov, offers one answer. Per Kolmogorov, the complexity of a string of symbols (such as binary 1s and 0s) depends on how concisely one can specify that sequence uniquely.

    For example, consider DNA, which is a chain of four different building blocks called nucleotides. A strand composed only of one nucleotide, repeating again and again, has much less complexity — and, by extension, encodes less information — than one composed of all four nucleotides in which the sequence seems random (as is more typical in the genome).
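
    A quick way to see this point numerically is the rough sketch below, which uses compressed size as a computable stand-in for Kolmogorov complexity (which cannot be computed exactly): a repetitive nucleotide string compresses to far fewer bytes than a random one of the same length.

    ```python
    import random
    import zlib

    # A repetitive nucleotide string admits a much shorter description than a
    # random one. Compressed size is only a crude upper-bound proxy for
    # Kolmogorov complexity, used here purely for illustration.

    random.seed(0)
    repetitive = "A" * 1000
    random_seq = "".join(random.choice("ACGT") for _ in range(1000))

    for label, seq in [("repetitive", repetitive), ("random", random_seq)]:
        size = len(zlib.compress(seq.encode()))
        print(f"{label:10s}: {len(seq)} nucleotides -> {size} bytes compressed")
    ```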

    But Szostak pointed out that Kolmogorov’s measure of complexity neglects an issue crucial to biology: how biological molecules function.

    In biology, sometimes many different molecules can do the same job. Consider RNA molecules, some of which have biochemical functions that can easily be defined and measured. (Like DNA, RNA is made up of sequences of nucleotides.) In particular, short strands of RNA called aptamers securely bind to other molecules.

    Let’s say you want to find an RNA aptamer that binds to a particular target molecule. Can lots of aptamers do it, or just one? If only a single aptamer can do the job, then it’s unique, just as a long, seemingly random sequence of letters is unique. Szostak said that this aptamer would have a lot of what he called “functional information.”

    If many different aptamers can perform the same task, the functional information is much smaller. So we can calculate the functional information of a molecule by asking how many other molecules of the same size can do the same task just as well.
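
    Stated quantitatively (this follows Szostak's published definition and is given here from general knowledge of that work rather than quoted from the article), the functional information associated with achieving at least a degree of function E_x is

    ```latex
    I(E_x) = -\log_2 F(E_x)
    ```

    where F(E_x) is the fraction of all sequences of the given length whose activity meets or exceeds E_x. If only one sequence out of N possibilities works, I = log_2 N bits; if many work, I is correspondingly smaller.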

    Szostak went on to show that in a case like this, functional information can be measured experimentally. He made a bunch of RNA aptamers and used chemical methods to identify and isolate the ones that would bind to a chosen target molecule. He then mutated the winners a little to seek even better binders and repeated the process. The better an aptamer gets at binding, the less likely it is that another RNA molecule chosen at random will do just as well: The functional information of the winners in each round should rise. Szostak found that the functional information of the best-performing aptamers got ever closer to the maximum value predicted theoretically.

    Selected for Function

    Hazen came across Szostak’s idea while thinking about the origin of life — an issue that drew him in as a mineralogist, because chemical reactions taking place on minerals have long been suspected to have played a key role in getting life started. “I concluded that talking about life versus nonlife is a false dichotomy,” Hazen said. “I felt there had to be some kind of continuum — there has to be something that’s driving this process from simpler to more complex systems.” Functional information, he thought, promised a way to get at the “increasing complexity of all kinds of evolving systems.”

    In 2007 Hazen collaborated with Szostak to write a computer simulation involving algorithms that evolve via mutations. Their function, in this case, was not to bind to a target molecule, but to carry out computations. Again they found that the functional information increased spontaneously over time as the system evolved.

    There the idea languished for years. Hazen could not see how to take it any further until Wong accepted a fellowship at the Carnegie Institution in 2021. Wong had a background in planetary atmospheres, but he and Hazen discovered they were thinking about the same questions. “From the very first moment that we sat down and talked about ideas, it was unbelievable,” Hazen said.

    “I had got disillusioned with the state of the art of looking for life on other worlds,” Wong said. “I thought it was too narrowly constrained to life as we know it here on Earth, but life elsewhere may take a completely different evolutionary trajectory. So how do we abstract far enough away from life on Earth that we’d be able to notice life elsewhere even if it had different chemical specifics, but not so far that we’d be including all kinds of self-organizing structures like hurricanes?”

    The pair soon realized that they needed expertise from a whole other set of disciplines. “We needed people who came at this problem from very different points of view, so that we all had checks and balances on each other’s prejudices,” Hazen said. “This is not a mineralogical problem; it’s not a physics problem, or a philosophical problem. It’s all of those things.”

    They suspected that functional information was the key to understanding how complex systems like living organisms arise through evolutionary processes happening over time. “We all assumed the second law of thermodynamics supplies the arrow of time,” Hazen said. “But it seems like there’s a much more idiosyncratic pathway that the universe takes. We think it’s because of selection for function — a very orderly process that leads to ordered states. That’s not part of the second law, although it’s not inconsistent with it either.”

    Looked at this way, the concept of functional information allowed the team to think about the development of complex systems that don’t seem related to life at all.

    At first glance, it doesn’t seem a promising idea. In biology, function makes sense. But what does “function” mean for a rock?

    All it really implies, Hazen said, is that some selective process favors one entity over lots of other potential combinations. A huge number of different minerals can form from silicon, oxygen, aluminum, calcium and so on. But only a few are found in any given environment. The most stable minerals turn out to be the most common. But sometimes less stable minerals persist because there isn’t enough energy available to convert them to more stable phases.

    This might seem trivial, like saying that some objects exist while other ones don’t, even if they could in theory. But Hazen and Wong have shown that, even for minerals, functional information has increased over the course of Earth’s history. Minerals evolve toward greater complexity (though not in the Darwinian sense). Hazen and colleagues speculate that complex forms of carbon such as graphene might form in the hydrocarbon-rich environment of Saturn’s moon Titan — another example of an increase in functional information that doesn’t involve life.

    It’s the same with chemical elements. The first moments after the Big Bang were filled with undifferentiated energy. As things cooled, quarks formed and then condensed into protons and neutrons. These gathered into the nuclei of hydrogen, helium and lithium atoms. Only once stars formed and nuclear fusion happened within them did more complex elements like carbon and oxygen form. And only when some stars had exhausted their fusion fuel did their collapse and explosion in supernovas create heavier elements such as heavy metals. Steadily, the elements increased in nuclear complexity.

    Wong said their work implies three main conclusions.

    First, biology is just one example of evolution. “There is a more universal description that drives the evolution of complex systems.”

    Second, he said, there might be “an arrow in time that describes this increasing complexity,” similar to the way the second law of thermodynamics, which describes the increase in entropy, is thought to create a preferred direction of time.

    Finally, Wong said, “information itself might be a vital parameter of the cosmos, similar to mass, charge and energy.”

    In the work Hazen and Szostak conducted on evolution using artificial-life algorithms, the increase in functional information was not always gradual. Sometimes it would happen in sudden jumps. That echoes what is seen in biological evolution. Biologists have long recognized transitions where the complexity of organisms increases abruptly. One such transition was the appearance of organisms with cellular nuclei (around 1.8 billion to 2.7 billion years ago). Then there was the transition to multicellular organisms (around 2 billion to 1.6 billion years ago), the abrupt diversification of body forms in the Cambrian explosion (540 million years ago), and the appearance of central nervous systems (around 600 million to 520 million years ago). The arrival of humans was arguably another major and rapid evolutionary transition.

    Evolutionary biologists have tended to view each of these transitions as a contingent event. But within the functional-information framework, it seems possible that such jumps in evolutionary processes (whether biological or not) are inevitable.

    In these jumps, Wong pictures the evolving objects as accessing an entirely new landscape of possibilities and ways to become organized, as if penetrating to the “next floor up.” Crucially, what matters — the criteria for selection, on which continued evolution depends — also changes, plotting a wholly novel course. On the next floor up, possibilities await that could not have been guessed before you reached it.

    For example, during the origin of life it might initially have mattered that proto-biological molecules would persist for a long time — that they’d be stable. But once such molecules became organized into groups that could catalyze one another’s formation — what Kauffman has called autocatalytic cycles — the molecules themselves could be short-lived, so long as the cycles persisted. Now it was dynamical, not thermodynamic, stability that mattered. Ricard Solé of the Santa Fe Institute thinks such jumps might be equivalent to phase transitions in physics, such as the freezing of water or the magnetization of iron: They are collective processes with universal features, and they mean that everything changes, everywhere, all at once. In other words, in this view there’s a kind of physics of evolution — and it’s a kind of physics we know about already.

    The Biosphere Creates Its Own Possibilities

    The tricky thing about functional information is that, unlike a measure such as size or mass, it is contextual: It depends on what we want the object to do, and what environment it is in. For instance, the functional information for an RNA aptamer binding to a particular molecule will generally be quite different from the information for binding to a different molecule.

    Yet finding new uses for existing components is precisely what evolution does. Feathers did not evolve for flight, for example. This repurposing reflects how biological evolution is jerry-rigged, making use of what’s available.

    Kauffman argues that biological evolution is thus constantly creating not just new types of organisms but new possibilities for organisms, ones that not only did not exist at an earlier stage of evolution but could not possibly have existed. From the soup of single-celled organisms that constituted life on Earth 3 billion years ago, no elephant could have suddenly emerged — this required a whole host of preceding, contingent but specific innovations.

    However, there is no theoretical limit to the number of uses an object has. This means that the appearance of new functions in evolution can’t be predicted — and yet some new functions can dictate the very rules of how the system evolves subsequently. “The biosphere is creating its own possibilities,” Kauffman said. “Not only do we not know what will happen, we don’t even know what can happen.” Photosynthesis was such a profound development; so were eukaryotes, nervous systems and language. As the microbiologist Carl Woese and the physicist Nigel Goldenfeld put it in 2011, “We need an additional set of rules describing the evolution of the original rules. But this upper level of rules itself needs to evolve. Thus, we end up with an infinite hierarchy.”

    The physicist Paul Davies of Arizona State University agrees that biological evolution “generates its own extended possibility space which cannot be reliably predicted or captured via any deterministic process from prior states. So life evolves partly into the unknown.”

    Mathematically, a “phase space” is a way of describing all possible configurations of a physical system, whether it’s as comparatively simple as an idealized pendulum or as complicated as all the atoms comprising the Earth. Davies and his co-workers have recently suggested that evolution in an expanding accessible phase space might be formally equivalent to the “incompleteness theorems” devised by the mathematician Kurt Gödel. Gödel showed that any system of axioms in mathematics permits the formulation of statements that can’t be shown to be true or false. We can only decide such statements by adding new axioms.

    Davies and colleagues say that, as with Gödel’s theorem, the key factor that makes biological evolution open-ended and prevents us from being able to express it in a self-contained and all-encompassing phase space is that it is self-referential: The appearance of new actors in the space feeds back on those already there to create new possibilities for action. This isn’t the case for physical systems, which, even if they have, say, millions of stars in a galaxy, are not self-referential.

    “An increase in complexity provides the future potential to find new strategies unavailable to simpler organisms,” said Marcus Heisler, a plant developmental biologist at the University of Sydney and co-author of the incompleteness paper. This connection between biological evolution and the issue of noncomputability, Davies said, “goes right to the heart of what makes life so magical.”

    Is biology special, then, among evolutionary processes in having an open-endedness generated by self-reference? Hazen thinks that in fact once complex cognition is added to the mix — once the components of the system can reason, choose, and run experiments “in their heads” — the potential for macro-micro feedback and open-ended growth is even greater. “Technological applications take us way beyond Darwinism,” he said. A watch gets made faster if the watchmaker is not blind.

    Back to the Bench

    If Hazen and colleagues are right that evolution involving any kind of selection inevitably increases functional information — in effect, complexity — does this mean that life itself, and perhaps consciousness and higher intelligence, is inevitable in the universe? That would run counter to what some biologists have thought. The eminent evolutionary biologist Ernst Mayr believed that the search for extraterrestrial intelligence was doomed because the appearance of humanlike intelligence is “utterly improbable.” After all, he said, if intelligence at a level that leads to cultures and civilizations were so adaptively useful in Darwinian evolution, how come it only arose once across the entire tree of life?

    Mayr’s evolutionary point possibly vanishes in the jump to humanlike complexity and intelligence, whereupon the whole playing field is utterly transformed. Humans attained planetary dominance so rapidly (for better or worse) that the question of when it will happen again becomes moot.

    But what about the chances of such a jump happening in the first place? If the new “law of increasing functional information” is right, it looks as though life, once it exists, is bound to get more complex by leaps and bounds. It doesn’t have to rely on some highly improbable chance event.

    What’s more, such an increase in complexity seems to imply the appearance of new causal laws in nature that, while not incompatible with the fundamental laws of physics governing the smallest component parts, effectively take over from them in determining what happens next. Arguably we see this already in biology: Galileo’s (apocryphal) experiment of dropping two masses from the Leaning Tower of Pisa no longer has predictive power when the masses are not cannonballs but living birds.

    Together with the chemist Lee Cronin of the University of Glasgow, Sara Walker of Arizona State University has devised an alternative set of ideas to describe how complexity arises, called assembly theory. In place of functional information, assembly theory relies on a number called the assembly index, which measures the minimum number of steps required to make an object from its constituent ingredients.

    “Laws for living systems must be somewhat different than what we have in physics now,” Walker said, “but that does not mean that there are no laws.” But she doubts that the putative law of functional information can be rigorously tested in the lab. “I am not sure how one could say [the theory] is right or wrong, since there is no way to test it objectively,” she said. “What would the experiment look for? How would it be controlled? I would love to see an example, but I remain skeptical until some metrology is done in this area.”

    Hazen acknowledges that, for most physical objects, it is impossible to calculate functional information even in principle. Even for a single living cell, he admits, there’s no way of quantifying it. But he argues that this is not a sticking point, because we can still understand it conceptually and get an approximate quantitative sense of it. Similarly, we can’t calculate the exact dynamics of the asteroid belt because the gravitational problem is too complicated — but we can still describe it approximately enough to navigate spacecraft through it.

    Wong sees a potential application of their ideas in astrobiology. One of the curious aspects of living organisms on Earth is that they tend to make a far smaller subset of organic molecules than they could make given the basic ingredients. That’s because natural selection has picked out some favored compounds. There’s much more glucose in living cells, for example, than you’d expect if molecules were simply being made either randomly or according to their thermodynamic stability. So one potential signature of lifelike entities on other worlds might be similar signs of selection outside what chemical thermodynamics or kinetics alone would generate. (Assembly theory similarly predicts complexity-based biosignatures.)

    There might be other ways of putting the ideas to the test. Wong said there is more work still to be done on mineral evolution, and they hope to look at nucleosynthesis and computational “artificial life.” Hazen also sees possible applications in oncology, soil science and language evolution. For example, the evolutionary biologist Frédéric Thomas of the University of Montpellier in France and colleagues have argued that the selective principles governing the way cancer cells change over time in tumors are not like those of Darwinian evolution, in which the selection criterion is fitness, but more closely resemble the idea of selection for function from Hazen and colleagues.

    Hazen’s team has been fielding queries from researchers ranging from economists to neuroscientists, who are keen to see if the approach can help. “People are approaching us because they are desperate to find a model to explain their system,” Hazen said.

    But whether or not functional information turns out to be the right tool for thinking about these questions, many researchers seem to be converging on similar questions about complexity, information, evolution (both biological and cosmic), function and purpose, and the directionality of time. It’s hard not to suspect that something big is afoot. There are echoes of the early days of thermodynamics, which began with humble questions about how machines work and ended up speaking to the arrow of time, the peculiarities of living matter, and the fate of the universe.

  • Boeing secures contract to develop F-47, America’s next-generation fighter jet

    The United States Department of Defense has awarded aerospace giant Boeing a USD 20 billion contract to develop a new sixth-generation fighter jet under the Next Generation Air Dominance (NGAD) programme. The announcement was made by President Donald Trump during a briefing at the White House on 21 March.

    The new aircraft, designated F-47, is intended to replace the existing F-22 Raptor and is expected to become the most advanced combat aircraft in the world. Boeing secured the deal over Lockheed Martin, which was previously considered a strong contender due to its experience with the F-22 and F-35 programmes.

    Trump stated that the fighter would feature cutting-edge stealth capabilities and would outperform any similar aircraft currently in operation worldwide. He also revealed that an experimental version of the F-47 had been secretly test-flown for nearly five years.

    The jet’s full design remains classified, with only a basic outline of the aircraft’s nose made public. Defence Secretary Pete Hegseth, who joined Trump at the announcement, described the development as a historic step for American air power.

    General David W. Allvin, Chief of Staff of the United States Air Force, added that the F-47 would serve as the centrepiece of a broader system of technologies under the NGAD programme. He emphasised that the aircraft would ensure U.S. air dominance for generations to come.

    To support the project, Boeing plans to expand its operations in St. Louis, Missouri, where production of the F/A-18 Super Hornet is due to end by 2027. Resources from the Super Hornet line will be redirected to NGAD-related work.

    Additionally, Boeing will construct three new facilities in St. Louis, including a laboratory and testing centre, an advanced coatings facility, and a final assembly hall. The company is also developing a new plant in Arizona to produce advanced composite materials for the F-47’s airframe.

    The NGAD programme is expected to cost the U.S. military over USD 28 billion through 2029, according to the Air Force’s most recent budget report. The initiative represents a long-term investment in maintaining strategic air superiority in future conflicts.

  • General Allvin outlines F-47 fighter jet development under Next-Generation Air Dominance (NGAD) programme

    The United States Air Force has awarded the contract for its Next Generation Air Dominance (NGAD) platform, officially naming the new fighter jet the F-47. The announcement was made by General David Allvin, Chief of Staff of the Air Force, who described the move as a significant step in maintaining long-term air superiority.

    According to General Allvin, the F-47 will not simply replace existing aircraft, but will redefine the future of aerial warfare. Designed to outpace and outmanoeuvre modern threats, the fighter aims to be the most advanced, adaptable and lethal platform ever developed.

    The F-47 is described as the first crewed sixth-generation fighter aircraft, developed to operate in the most complex and high-risk environments. The platform’s foundation was laid over the past five years through experimental ‘X-plane’ test flights, which validated key technologies and design concepts.

    These experimental flights played a crucial role in maturing the F-47’s capabilities and ensuring readiness ahead of full-scale production. General Allvin confirmed that the fighter will enter service during the administration of President Donald Trump.

    Officials have also highlighted the F-47’s advanced state of development, stating it surpasses the F-22 in several critical areas. Though the F-22 remains a leading air superiority fighter, the F-47 is expected to offer a generational leap in performance.

    Compared to previous aircraft, the F-47 will be more cost-effective, have greater range, and offer improved stealth and sustainability. Its modular design also means it can be adapted quickly to emerging threats, with lower manpower and infrastructure requirements.

    General Allvin reaffirmed the Air Force’s commitment to ensuring American air dominance remains unchallenged. He concluded by stating the F-47 reflects a clear promise to deter potential adversaries and secure U.S. interests around the globe.

  • U.S. Air Force unveils designations for next-generation uncrewed fighter prototypes

    On 3 March, the U.S. Air Force announced the official designation of two prototype uncrewed fighter aircraft under its Collaborative Combat Aircraft (CCA) programme: the YFQ-42A, developed by General Atomics, and the YFQ-44A, developed by Anduril. These aircraft represent a significant step forward in the development of autonomous air combat systems and are expected to begin flight testing this summer.

    Designed to operate alongside piloted aircraft, the CCA prototypes will enhance the Joint Force’s ability to maintain air superiority in future contested environments. Their capabilities focus on human-machine teaming and advanced autonomy, aimed at defeating evolving threats.

    The designation system—YFQ-42A and YFQ-44A—follows the Air Force’s standard Mission Design Series format, identifying them as prototype (Y), fighter (F), and uncrewed (Q) aircraft. Once in production, the “Y” prefix will be removed to reflect their operational status.

    General David W. Allvin, Air Force Chief of Staff, emphasised the programme’s rapid progress, noting the aircraft moved from concept to prototype in under two years. “It may be just symbolic,” he said, “but we are telling the world we are leaning into a new chapter of aerial warfare.”

    The Air Force continues to work closely with industry partners to refine and test both aircraft, with data from these efforts informing future development. These prototypes will play a pivotal role in shaping the direction of the CCA programme and advancing U.S. airpower innovation.

  • U.S. Department of Defense awards USD 7 billion for NGAP engine development programme

    On 27 January, the U.S. Department of Defense announced contracts with General Electric and Pratt & Whitney to advance the Next Generation Adaptive Propulsion (NGAP) programme. Both companies received $3.5 billion contracts for work expected to be completed by mid-July 2032, with potential value increases depending on project expansions.

    General Electric will focus on engineering and technology testing in this new phase, with core activities based in Cincinnati, Ohio. Meanwhile, Pratt & Whitney’s contract involves the design and prototyping of engines, along with their integration into next-generation combat aircraft systems.

    The NGAP programme aims to develop advanced propulsion systems for sixth-generation fighter jets, beginning with the Next Generation Air Dominance (NGAD) aircraft. The engines will feature high reliability, extended component life, low fuel consumption, and the capability to power on-board systems and advanced weaponry.

    These advancements are expected to enhance the operational capabilities of future aircraft significantly. The NGAP engines will support increased efficiency and energy capacity, meeting the demands of cutting-edge military aviation technology.

  • Lockheed Martin responds to NGAD contract loss after Boeing selected for U.S. Air Force programme

    Boeing has been chosen to lead the U.S. Air Force’s Next Generation Air Dominance (NGAD) programme, a multibillion-dollar effort to develop a successor to the F-22 Raptor. The announcement was made by President Donald Trump, who confirmed Boeing’s selection to spearhead the advanced crewed fighter project.

    The NGAD initiative, expected to exceed $20 billion in its development phase, could grow to hundreds of billions in long-term procurement. The new fighter, reportedly designated the F-47, will feature advanced stealth, sensors, next-generation propulsion, and manned-unmanned teaming capabilities.

    Lockheed Martin, which previously developed the F-22 and F-35 fighters, expressed disappointment at not being selected for the programme. “While disappointed with this outcome, we are confident we delivered a competitive solution,” the company said in a formal statement.

    Despite the setback, Lockheed Martin reaffirmed its long-term commitment to advancing U.S. air dominance capabilities. “We are committed to advancing the state of the art in air dominance to ensure America has the most revolutionary systems to counter the rapidly evolving threat environment,” the statement continued.

    The defence firm also confirmed its continued collaboration with the U.S. Department of Defense on future projects. “Lockheed Martin continues to work to advance critical technologies to outpace emerging threats and deliver true 21st Century Security® solutions to our nation’s military forces,” the company added.

    The NGAD programme is a key component of the U.S. Air Force’s future combat strategy, aimed at ensuring technological superiority through distributed operations and integration with autonomous systems. With Boeing at the helm, the F-47 project is expected to define the next era of air combat capability for decades to come.

  • U.S. Navy awards USD 590 million contract to Bell and Boeing for CMV-22B Osprey aircraft

    The U.S. Navy has awarded a USD 590 million contract to the Bell Boeing Joint Program Office for the production and delivery of five CMV-22B Osprey aircraft. The fixed-price incentive modification was issued under an existing agreement and aims to strengthen logistics capabilities for carrier-based operations.

    A CMV-22B Osprey, assigned to the “Mighty Bison” of Fleet Logistics Multi-Mission Squadron (VRM) 40, lands on the flight deck of the world’s largest aircraft carrier, USS Gerald R. Ford (CVN 78), September 19, 2024. These carrier landing qualifications are a first for the CMV-22B Osprey on a Ford-class aircraft carrier. Ford is underway in the Atlantic Ocean to further develop core unit capabilities during its basic phase of the optimized fleet response plan. (U.S. Navy photo by Mass Communication Specialist Seaman Gladjimi Balisage).

    The CMV-22B offers vertical takeoff and landing alongside the range and speed of a turboprop, allowing for quick movement of personnel and high-priority cargo to aircraft carriers. This capability is especially valuable in contested maritime environments, where traditional runways may not be accessible.

    Work on the contract will be carried out at multiple sites across the United States, including Fort Worth, Amarillo, and Red Oak in Texas; Ridley Park, Pennsylvania; East Aurora, New York; Park City, Utah; and McKinney, Texas. Additional tasks are also scheduled both within the U.S. and overseas.

    At the time of the award, USD 132.1 million from the Navy’s fiscal year 2024 aircraft procurement budget had been obligated. The Naval Air Systems Command, headquartered at Patuxent River, Maryland, will oversee the contract.

    The CMV-22B will eventually replace the ageing C-2A Greyhound, which has been in service since the 1960s. The CMV-22B provides the Navy with significant flexibility and enhanced operational capability.

    The aircraft first deployed operationally with Carrier Air Wing 2 in 2021 and is expected to become central to the Carrier Onboard Delivery (COD) mission. Its duties include transporting F-35 power modules and other mission-critical equipment across the fleet.

  • How does the critical path method apply to project manufacturing?

    The Critical Path Method (CPM) is a valuable tool for managing project manufacturing, particularly in industries like aerospace where each product is unique and complex. Here’s how CPM applies to project manufacturing:

    1. Defining Activities: In project manufacturing, each operation required to produce a component or subassembly is considered an activity. These activities are detailed in routing sheets, which include information such as operation duration, labor and machine hours, and material requirements.
    2. Sequencing Activities: CPM helps in determining the sequence of activities. By understanding the dependencies between different operations, a project manager can establish the order in which tasks need to be completed. For example, in manufacturing an aircraft wing, operations like cutting, drilling, and welding must be performed in a specific sequence.
    3. Identifying the Critical Path: The critical path is the longest sequence of activities that must be completed on time for the entire project to be finished by its deadline. Identifying this path is crucial because any delay in these activities will directly impact the project completion date. In project manufacturing, this helps in focusing resources and attention on critical tasks to avoid delays.
    4. Resource Allocation: CPM allows for detailed resource planning. By knowing the duration and sequence of activities, project managers can allocate labor, machinery, and materials more effectively. This ensures that resources are available when needed and helps in avoiding bottlenecks.
    5. Integration with Project Schedule: One of the main benefits of using CPM in project manufacturing is the integration of the manufacturing schedule with the overall project schedule. This integration provides a holistic view of the project, allowing for better coordination and control. Any changes in the project schedule can be quickly reflected in the manufacturing schedule, ensuring alignment.
    6. Automating Schedule Creation: CPM can be used to automate the creation of the initial schedule. By converting routing sheets into an activity list in project scheduling software, a detailed, resource-loaded schedule can be generated quickly. This automation reduces the time and effort required for manual scheduling and minimizes errors.
    7. Monitoring and Updating: CPM facilitates regular monitoring and updating of the project schedule. Real-time data from the manufacturing execution system can be integrated to update the schedule with actual start times, durations, and resource usage. This helps in tracking progress and making necessary adjustments to stay on schedule.
    8. Earned Value Management (EVM): CPM supports the use of EVM by providing a clear framework for measuring project performance. By comparing actual progress against the baseline schedule, project managers can identify variances and take corrective actions. This is particularly useful in project manufacturing, where efficiency and adherence to schedule are critical.

    Example of CPM in Project Manufacturing

    Consider the production of a spacecraft component, such as a propulsion system. The manufacturing process involves several operations, including machining, assembly, and testing. Using CPM, the project manager can:

    • Define each operation as an activity with specific durations and resource requirements.
    • Sequence the activities based on dependencies (e.g., machining must be completed before assembly).
    • Identify the critical path, which might include the longest and most resource-intensive operations.
    • Allocate resources to ensure that critical activities are prioritized.
    • Integrate the manufacturing schedule with the overall project schedule to ensure alignment.
    • Monitor progress and update the schedule based on real-time data from the shop floor.

    By applying CPM, the project manager can ensure that the propulsion system is manufactured efficiently and on time, contributing to the overall success of the spacecraft project.
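
    The sketch below shows how such a schedule calculation might look in code for the propulsion-system example, using a standard forward/backward pass to find the critical path. The durations and the parallel procurement activity are hypothetical placeholders; in practice the activity list would come from the routing sheets.

    ```python
    from typing import Dict, List, Tuple

    # Hypothetical activity network for the propulsion-system example:
    # durations are in days; "procurement" is an assumed parallel branch.
    activities: Dict[str, Tuple[int, List[str]]] = {
        "machining":   (10, []),
        "procurement": (6,  []),
        "assembly":    (8,  ["machining", "procurement"]),
        "testing":     (5,  ["assembly"]),
    }

    def topo_order(acts):
        """Order activities so that every predecessor comes first."""
        ordered, placed = [], set()
        while len(ordered) < len(acts):
            for name, (_, preds) in acts.items():
                if name not in placed and all(p in placed for p in preds):
                    ordered.append(name)
                    placed.add(name)
        return ordered

    def critical_path(acts):
        """Forward/backward pass returning the critical path and project duration."""
        order = topo_order(acts)
        es, ef = {}, {}                      # earliest start / finish
        for name in order:
            dur, preds = acts[name]
            es[name] = max((ef[p] for p in preds), default=0)
            ef[name] = es[name] + dur
        project_end = max(ef.values())
        ls, lf = {}, {}                      # latest start / finish
        for name in reversed(order):
            succs = [s for s, (_, ps) in acts.items() if name in ps]
            lf[name] = min((ls[s] for s in succs), default=project_end)
            ls[name] = lf[name] - acts[name][0]
        critical = [n for n in order if es[n] == ls[n]]   # zero-slack activities
        return critical, project_end

    if __name__ == "__main__":
        path, duration = critical_path(activities)
        print("critical path:", " -> ".join(path), "| project duration:", duration, "days")
    ```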

    In summary, the Critical Path Method provides a structured approach to planning, scheduling, and managing project manufacturing. It helps in defining activities, sequencing them, identifying the critical path, allocating resources, integrating schedules, and monitoring progress, ultimately leading to more efficient and effective project execution.