
FMG 5.6.6 Upgrade Guide



Upgrading an ADOM

To upgrade an ADOM, you must be logged in as a super user administrator. An ADOM can only be upgraded after all the devices within the ADOM have been upgraded. See for more information.

To upgrade an ADOM:

1. Go to System Settings > All ADOMs.
2. Right-click on an ADOM and select Upgrade, or select an ADOM and then select More > Upgrade from the toolbar. If the ADOM has already been upgraded to the latest version, this option is not available.
3. Select OK in the confirmation dialog box to upgrade the ADOM.

If any of the devices within the ADOM have not yet been upgraded, the upgrade is aborted and an error message is shown. Upgrade the remaining devices within the ADOM, then return to step 1 to try upgrading the ADOM again.

To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button.

Now switch back to your document and choose Edit → Paste from the gedit menu bar.

An audio interface is a hardware device that provides a connection between your computer and audio equipment, including microphones and speakers. Audio interfaces usually convert audio signals between analog and digital formats: signals entering the computer are passed through an analog-to-digital convertor, and signals leaving the computer are passed through a digital-to-analog convertor.

Some audio interfaces have digital input and output ports, which means that other devices perform the conversion between analog and digital signal formats. The conversion between analog and digital audio signal formats is the primary function of audio interfaces.

Real sound has an infinite range of pitch, volume, and durational possibilities. Computers cannot process infinite information, and require sound to be converted to a digital format.
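To make the conversion concrete, here is a minimal Python sketch of sampling and quantization, assuming CD-style settings (44,100 samples per second, 16-bit samples). Real convertors do this in hardware, so the function name and constants are purely illustrative:

```python
import math

SAMPLE_RATE = 44100   # samples per second (CD-quality rate)
BIT_DEPTH = 16        # bits per sample
MAX_VALUE = 2 ** (BIT_DEPTH - 1) - 1  # 32767 for signed 16-bit samples

def digitize(frequency_hz, duration_s):
    """Approximate an analog sine wave as a list of 16-bit integer samples."""
    total_samples = int(SAMPLE_RATE * duration_s)
    samples = []
    for n in range(total_samples):
        analog_value = math.sin(2 * math.pi * frequency_hz * n / SAMPLE_RATE)
        samples.append(int(round(analog_value * MAX_VALUE)))  # quantize to an integer
    return samples

# One second of a 440 Hz tone becomes 44100 discrete integer values.
print(len(digitize(440, 1.0)))  # 44100
```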

Digital sound signals have a limited range of pitch, volume, and durational possibilities. High-quality analog-to-digital and digital-to-analog convertors change the signal format in a way that preserves the original analog signal as closely as possible. The quality of the convertors is very important in determining the quality of an audio interface.

Musical Instrument Digital Interface (MIDI) is a standard used to control digital musical devices. Many people associate the term with low-quality imitations of acoustic instruments.

This is unfortunate, because MIDI signals themselves do not have a sound. MIDI signals are instructions to control devices: they tell a synthesizer when to start and stop a note, how long the note should be, and what pitch it should have. The synthesizer follows these instructions and creates an audio signal. Many MIDI-controlled synthesizers are low-quality imitations of acoustic instruments, but many are high-quality imitations. MIDI-powered devices are used in many mainstream and non-mainstream musical situations, and can be nearly indistinguishable from actual acoustic instruments.
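As a concrete illustration that MIDI carries instructions rather than sound, the following Python sketch builds raw note-on and note-off messages as bytes; the note number and velocity are arbitrary example values:

```python
NOTE_ON = 0x90   # status byte for "note on", channel 1
NOTE_OFF = 0x80  # status byte for "note off", channel 1

def note_on(note, velocity):
    """Return the three bytes telling a synthesizer to start a note."""
    return bytes([NOTE_ON, note, velocity])

def note_off(note):
    """Return the three bytes telling a synthesizer to stop a note."""
    return bytes([NOTE_OFF, note, 0])

# Middle C (note 60) at a moderate velocity: just three bytes, no audio at all.
print(note_on(60, 96).hex())   # '903c60'
print(note_off(60).hex())      # '803c00'
```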

MIDI interfaces only transmit MIDI signals, not audio signals. Some audio interfaces have built-in MIDI interfaces, allowing both interfaces to share the same physical device.

The connection type is only one of the considerations when choosing a sound card. If you have a desktop computer, and you will not be using a notebook or netbook computer for audio, you should consider an internal PCI or PCI-Express connection.

If you want an external sound card, you should consider a FireWire connection. If FireWire-connected sound cards are too expensive, you should consider a USB connection. The connection type is not the most important consideration when choosing a sound card. The subjective quality of the analog-to-digital and digital-to-analog convertors is the most important consideration.

The conversion between analog and digital signals distinguishes low-quality and high-quality audio interfaces. The sample rate and sample format control the amount of audio information that is stored by the computer. The greater the amount of information stored, the better the audio interface can approximate the original signal from the microphone.

The possible sample rates and sample formats only partially determine the quality of the sound captured or produced by an audio interface. For example, an audio interface integrated into a motherboard may be capable of a 24-bit sample format and 192 kHz sample rate, but a professional-level, FireWire-connected audio interface capable of a 16-bit sample format and 44.1 kHz sample rate may sound better.
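The amount of information involved is easy to quantify. The following Python sketch (the function name and figures are illustrative) compares the raw data rate of CD-quality settings with a 24-bit, 192 kHz format:

```python
def data_rate_bytes_per_second(sample_rate_hz, bits_per_sample, channels=2):
    """Raw (uncompressed) audio data rate for the given settings."""
    return sample_rate_hz * bits_per_sample * channels // 8

cd_quality = data_rate_bytes_per_second(44100, 16)    # 176,400 bytes/s
high_res = data_rate_bytes_per_second(192000, 24)     # 1,152,000 bytes/s

print(cd_quality, high_res, round(high_res / cd_quality, 1))
# 176400 1152000 6.5 -- more data stored, but not necessarily better convertors
```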

There are many different ways to monitor and adjust the level of an audio signal, and there is no widely-agreed practice. One reason for this situation is the technical limitations of recorded audio. Most level meters are designed so that the average level is -6 dB on the meter, and the maximum level is 0 dB. This practice was developed for analog audio. We recommend using an external meter and the 'K-system,' described in a link below. The K-system for level metering was developed for digital audio.
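For reference, digital meters measure levels in decibels relative to full scale (dBFS), where 0 dB is the largest value a sample can hold. A minimal sketch of that calculation, assuming 16-bit samples:

```python
import math

def peak_dbfs(samples, full_scale=32767):
    """Peak level of a block of 16-bit samples, in dB relative to full scale."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")  # digital silence
    return 20 * math.log10(peak / full_scale)

# A block peaking at half of full scale reads about -6 dB on the meter.
print(round(peak_dbfs([100, -16384, 2000]), 1))  # -6.0
```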

Panning adjusts the portion of a channel's signal that is sent to each output channel. In a stereophonic (two-channel) setup, the two channels represent the 'left' and the 'right' speakers.

Two channels of recorded audio are available in the DAW, and the default setup sends all of the 'left' recorded channel to the 'left' output channel, and all of the 'right' recorded channel to the 'right' output channel. Panning sends some of the left recorded channel's level to the right output channel, or some of the right recorded channel's level to the left output channel. Each recorded channel has a constant total output level, which is divided between the two output channels. The default setup for a left recorded channel is for 'full left' panning, meaning that 100% of the output level is output to the left output channel. An audio engineer might adjust this so that 80% of the recorded channel's level is output to the left output channel, and 20% of the level is output to the right output channel.
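Here is a hypothetical sketch of that behaviour, using a simple linear pan law in which the two shares always sum to the recorded channel's original level (real DAWs often use constant-power laws instead):

```python
def pan(sample, left_share):
    """Split one recorded sample between the left and right outputs.

    left_share is 1.0 for 'full left', 0.5 for 'center', 0.0 for 'full right'.
    The two shares always add up to 100% of the original level.
    """
    return round(sample * left_share, 3), round(sample * (1.0 - left_share), 3)

print(pan(1.0, 1.0))  # (1.0, 0.0)  full left
print(pan(1.0, 0.8))  # (0.8, 0.2)  80% left, 20% right
print(pan(1.0, 0.5))  # (0.5, 0.5)  center
```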

An audio engineer might make the left recorded channel sound like it is in front of the listener by setting the panner to 'center,' meaning that 50% of the output level is output to both the left and right output channels. Balance is sometimes confused with panning, even on commercially-available audio equipment. Adjusting the balance changes the volume level of the output channels, without redirecting the recorded signal. The default setting for balance is 'center,' meaning 0% change to the volume level. As you adjust the dial from 'center' toward the 'full left' setting, the volume level of the right output channel is decreased, and the volume level of the left output channel remains constant. As you adjust the dial from 'center' toward the 'full right' setting, the volume level of the left output channel is decreased, and the volume level of the right output channel remains constant.

If you set the dial to '20% left,' the audio equipment would reduce the volume level of the right output channel by 20%, increasing the perceived loudness of the left output channel by approximately 20%. You should adjust the balance so that you perceive both speakers as equally loud. Balance compensates for poorly set up listening environments, where the speakers are not equal distances from the listener.
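The following sketch contrasts this with panning, using the same simplified linear model as above: a balance control only attenuates the opposite output channel and never redirects any signal:

```python
def balance(left_out, right_out, setting):
    """Apply a balance setting to already-panned output channels.

    setting is 0.0 for 'center', -1.0 for 'full left', +1.0 for 'full right'.
    Turning toward one side only attenuates the opposite channel.
    """
    if setting < 0:                  # toward the left
        right_out *= 1.0 + setting   # reduce the right channel
    elif setting > 0:                # toward the right
        left_out *= 1.0 - setting    # reduce the left channel
    return left_out, right_out

# '20% left': only the right output channel is reduced, by 20%.
print(balance(1.0, 1.0, -0.2))
```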

If the left speaker is closer to you than the right speaker, you can adjust the balance to the right, which decreases the volume level of the left speaker. This is not an ideal solution, but sometimes it is impossible or impractical to set up your speakers correctly. You should adjust the balance only at final playback.

Routing audio transmits a signal from one place to another — between applications, between parts of applications, or between devices. On Linux systems, the JACK Audio Connection Kit is used for audio routing. JACK-aware applications (and PulseAudio ones, if so configured) provide inputs and outputs to the JACK server, depending on their configuration.
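These connections can also be made from a script. The sketch below uses the third-party JACK-Client Python package (installable as 'JACK-Client'); the port names are assumptions and will differ depending on which applications are actually running:

```python
import jack

# Attach to the running JACK server as a client named "router".
client = jack.Client("router")

# List every port the server currently knows about.
for port in client.get_ports():
    print(port.name)

# Route a synthesizer's output into a recorder's input.
# These port names are hypothetical; check the listing above for the real ones.
client.connect("fluidsynth:left", "ardour:Audio 1/audio_in 1")
client.connect("fluidsynth:right", "ardour:Audio 1/audio_in 2")
```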

The QjackCtl application can adjust the default connections. You can easily reroute the output of a program like FluidSynth so that it can be recorded by Ardour, for example, by using QjackCtl.

There is a growing trend toward five- and seven-channel audio, driven primarily by 'surround-sound' movies, but it is not widely available for music. Two 'surround-sound' formats exist for music: DVD Audio (DVD-A) and Super Audio CD (SACD). The development of these formats, and the devices to use them, is held back by the proliferation of headphones with personal MP3 players, a general lack of desire for improvement in audio quality amongst consumers, and the copy-protection measures put in place by record labels. The result is that, while some consumers are willing to pay higher prices for DVD-A or SACD recordings, only a small number of recordings are available. Even if you buy a DVD-A or SACD-capable player, you would need to replace all of your audio equipment with models that support proprietary copy-protection software.

Without this equipment, the player is often forbidden from outputting audio with a higher sample rate or sample format than a conventional audio CD. None of these factors, unfortunately, seem like they will change in the near future.

One of the techniques consistently used in computer science is abstraction. Abstraction is the process of creating a generic model for something (or some things) that are actually unique. The 'driver' for a hardware device in a computer is one form of dealing with abstraction: the computer's software interacts with all sound cards in a similar way, and it is the driver which translates the universal instructions given by the software into specific instructions for operating that hardware device.

Consider this real-world comparison: you know how to operate doors because of abstracted instructions. You don't know how to open and close every door that exists, but from the ones that you do know how to operate, your brain automatically creates abstracted instructions, like 'turn the handle' and 'push the door,' which apply to all or most doors. When you see a new door, you have certain expectations about how it works, based on the abstract behaviour of doors, and you quickly figure out how to operate that specific door with a simple visual inspection. The principle is the same with computer hardware drivers: since the computer already knows how to operate 'sound cards,' it just needs a few simple instructions (the driver) in order to know how to operate any particular sound card.

In Linux, the core of the operating system provides hardware drivers for most audio hardware. The hardware drivers, and the instructions that other software can use to connect to those drivers, are collectively called 'ALSA,' which stands for 'Advanced Linux Sound Architecture.' ALSA is the most direct way that software applications can interact with audio and MIDI hardware, and it used to be the most common way. However, in order to include all of the features that a software application might want to use, ALSA is quite complex, and can be error-prone.

For this and many other reasons, another level of abstraction is normally used, and this makes it easier for software applications to take advantage of the features they need. PulseAudio is an advanced sound server, intended to make audio programming in Linux operating systems as easy as possible. The idea behind its design is that an audio application needs only to output audio to PulseAudio, and PulseAudio will take care of the rest: choosing and controlling a particular device, adjusting the volume, working with other applications, and so on. PulseAudio even has the ability to use 'networked sound,' which allows two computers using PulseAudio to communicate as though they were one computer - either computer can input from or output to either computer's audio hardware just as easily as its own audio hardware. This is all controlled within PulseAudio, so no further complication is added to the software.

The JACK sound server offers fewer features than other sound servers, but they are tailor-made to allow the functionality required by audio creation applications. JACK also makes it easier for users to configure the options that are most important for such situations.

The server supports only one sample rate and format at a time, and allows applications and hardware to easily connect and multiplex in ways that other sound servers do not (see for information about routing and multiplexing). It is also optimized to run with consistently low latencies. Although using JACK requires a better understanding of the underlying hardware, the QjackCtl application provides a graphical user interface to ease the process.

Phonon is a sound server built into the KDE Software Compilation, and is one of the core components of KDE. By default on Fedora Linux, Phonon feeds output to PulseAudio, but on other platforms (like Mac OS X, Windows, other versions of Linux, FreeBSD, and any other system that supports KDE), Phonon can be configured to feed its output anywhere.

This is its greatest strength - that KDE applications like Amarok and Dragon Player need only be programmed to use Phonon, and they can rely on Phonon to take care of everything else. As KDE applications increasingly find their place in Windows and especially Mac OS X, this cross-platform capability is turning out to be very useful.

For periodic tasks, like processing audio (which has a consistently recurring amount of data per second), low latency is desirable, but consistent latency is usually more important.

Think of it like this: years ago in North America, milk was delivered to homes by a dedicated delivery person. Imagine if the milk delivery person had a medium-latency, but consistent schedule, returning every seven days. You would be able to plan for how much milk to buy, and to limit your intake so that you don't run out too soon. Now imagine if the milk delivery person had a low-latency, but inconsistent schedule, returning every one to four days. You would never be sure how much milk to buy, and you wouldn't know how to limit yourself. Sometimes there would be too much milk, and sometimes you would run out.

Audio-processing and synthesis software behaves in a similar way: if it has a consistent amount of latency, it can plan accordingly. If it has an inconsistent amount of latency - whether large or small - there will sometimes be too much data, and sometimes not enough. If your application runs out of audio data, there will be noise or silence in the audio signal - both bad things.
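That latency is largely a product of the audio buffer settings. A back-of-the-envelope calculation, with illustrative JACK-style parameters (period size, number of periods, sample rate):

```python
def buffer_latency_ms(frames_per_period, periods, sample_rate_hz):
    """Worst-case buffering delay for the given settings, in milliseconds."""
    return frames_per_period * periods / sample_rate_hz * 1000

# A small buffer gives low latency, but the deadline recurs more often,
# so the risk of occasionally missing it (and hearing a dropout) is higher.
print(round(buffer_latency_ms(64, 2, 48000), 2))    # 2.67 ms
print(round(buffer_latency_ms(1024, 2, 48000), 2))  # 42.67 ms
```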

If you've ever opened the 'System Monitor' application, you will probably have noticed that there are a lot of 'processes' running all the time. Some of these processes need the processor, and some of them are just waiting around for something to happen. To help increase the number of processes that can run at the same time, many modern CPUs have more than one 'core,' which allows more processes to be evaluated at the same time. Even with these improvements, there are usually more processes than available cores: my computer right now has 196 processes and only three cores. There has to be a way of deciding which process gets to run and when, and this task is left to the operating system.

In Linux systems like Fedora Linux, the core of the operating system (called the kernel) is responsible for deciding which process gets to execute at what time. This responsibility is called 'scheduling.'

Scheduling access to the processor is called processor scheduling. The kernel also manages scheduling for a number of other things, like memory access, video hardware access, audio hardware access, hard drive access, and so on. The algorithm (procedure) used for each of these scheduling tasks is different, and can be changed depending on the user's needs and the specific hardware being used.

In a hard drive, for example, it makes sense to consider the physical location of data on a disk before deciding which process gets to read first. For a processor this is irrelevant, but there are many other things to consider. A number of scheduling algorithms are available with the standard Linux kernel, and for most uses, a 'fair queueing' system is appropriate. This helps to ensure that all processes get an equal amount of time with the processor, but that approach is unacceptable for audio work. If you're recording a live concert, and the 'PackageKit' update manager starts, you don't care if PackageKit gets a fair share of processing time - it's more important that the audio is recorded as accurately as possible.

For that matter, if you're recording a live concert, and your computer isn't fast enough to update the monitor, keyboard, and mouse position while providing uninterrupted, high-quality audio, you want the audio instead of the monitor, keyboard, and mouse. After all, once you've missed even the smallest portion of audio, it's gone for good! Even so, a real-time kernel still uses the 'fair queueing' system by default.

This is good, because most processes don't need to have consistently low latencies. Only specific processes are designed to request high-priority scheduling. Each process is given (or asks for) a priority number, and the real-time kernel will always give processing time to the process with the highest priority number, even if that process uses up all of the available processing time.
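On Linux, a process can request this kind of fixed, high-priority scheduling through the kernel's standard interfaces. A minimal sketch using Python's os module; it assumes a kernel configuration and privileges (or rtprio limits) that permit real-time scheduling, and the priority value is arbitrary:

```python
import os

def request_realtime(priority=70):
    """Ask the kernel to schedule this process with a fixed real-time priority.

    Higher numbers win: the scheduler always runs the runnable process with
    the highest real-time priority, ahead of all normally-scheduled processes.
    """
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except PermissionError:
        # Ordinary users usually may not raise their own priority this way.
        return False

if request_realtime():
    print("running with real-time priority", os.sched_getparam(0).sched_priority)
else:
    print("real-time scheduling not permitted; using normal scheduling")
```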

This puts regular applications at a disadvantage: when a high-priority process is running, the rest of the system may be unable to function properly. In extreme (and very rare!) cases, a real-time process can encounter an error, use up all the processing time, and disallow any other process from running - effectively locking you out of your computer. Security measures have been taken to help ensure this doesn't happen, but as with anything, there is no guarantee. If you use a real-time kernel, you are exposing yourself to a slightly higher risk of system crashes. In Fedora Linux, the real-time kernel is provided by the Planet CCRMA at Home software repositories.

Along with the warnings in the Planet CCRMA at Home chapter (see ), here is one more to consider: the real-time kernel is used by fewer people than the standard kernel, so it is less well-tested. The chances of something going wrong are relatively low, but be aware that using a real-time kernel increases the level of risk. Always leave a non-real-time option available, in case the real-time kernel stops working.

As stated on the project's home page, it is the goal of Planet CCRMA at Home to provide packages which will transform a Fedora Linux-based computer into an audio workstation. What this means is that, while the Fedora Project does an excellent job of providing a general-purpose operating system, a general-purpose operating system is insufficient for audio work of the highest quality. The contributors to Planet CCRMA at Home provide software packages which can tune your system specifically for audio work. Planet CCRMA is intended for specialized audio workstations.

The software is packaged in a way that creates potential (and unknown) security risks, a consequence of the optimizations necessary to prepare a computer system for use in audio work. Furthermore, these optimizations may reveal software bugs present in non-Planet CCRMA software, and allow them to do more damage than on a non-optimized system.

Finally, a computer system's 'stability' (its ability to run without trouble) may be compromised by audio optimizations. Regular desktop applications may perform less well on audio-optimized systems, if the optimization process unintentionally de-optimizes some other process.

CCRMA is not a large, Linux-focussed organization.

It is an academic organization, and its primary intention with the Planet CCRMA at Home repository is to allow anybody with a computer to do the same kind of work that they do. The Fedora Project is a relatively large organization, backed by one of the world's largest commercial Linux providers, which is focussed on creating a stable and secure operating system for daily use. Furthermore, thousands of people around the world work for the Fedora Project or its corporate sponsor, and it is their responsibility to proactively solve problems. CCRMA has the same responsibility, but it does not have the dedicated resources of the Fedora Project, so it would be naive to think that it could provide the same level of support.

If you want to use your computer for both day-to-day desktop tasks and high-quality audio production, one good solution is to 'dual-boot' your computer. This involves installing Fedora Linux twice on the same physical computer, but it will allow you to keep an entirely separate operating system environment for the Planet CCRMA at Home software.

Not only will this allow you to safely and securely run Planet CCRMA applications in their most-optimized state, but you can further optimize your system by turning off and even removing some system services that you do not need for audio work. For example, a GNOME or KDE user might choose to install only 'Openbox' for their audio-optimized installation. This is optional, and recommended only for advanced users.

Yum normally installs the latest version of a package, regardless of which repository provides it. Using this plugin changes that behaviour, so that yum chooses package versions primarily based on the priority of the repository that provides them. If a newer version is available at a repository with lower priority, yum does not upgrade the package. If you simply wish to prevent a particular package from being updated, the instructions in are better-suited to your needs.
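The priority rule described above can be shown in miniature. This is only an illustrative sketch of the decision rule, not yum's actual code; the repository priorities and version numbers are made up:

```python
def choose_package(candidates):
    """Pick a package the way a priority-aware updater would.

    candidates: list of (repo_priority, version) tuples, where a LOWER
    priority number means a MORE trusted repository.
    """
    best_priority = min(priority for priority, _ in candidates)
    eligible = [version for priority, version in candidates if priority == best_priority]
    return max(eligible)  # newest version, but only from the preferred repository

# A newer version (2.0) exists only in a lower-priority repository, so it is ignored.
candidates = [(10, (1, 4)), (90, (2, 0))]
print(choose_package(candidates))  # (1, 4)
```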

The term Digital Audio Workstation (henceforth DAW) refers to the entire hardware and software setup used for professional (or professional-quality) audio recording, manipulation, synthesis, and production. It originally referred to devices purpose-built for the task, but as personal computers have become more powerful and widespread, certain specially-designed personal computers can also be thought of as DAWs. The software running on these computers, especially software capable of multi-track recording, playback, and synthesis, is simply called 'DAW software,' which is often shortened to 'DAW.' So the term 'DAW' and its usage are moderately ambiguous, but generally refer to one of the things mentioned.

Recording is the process of capturing audio regions (also called 'clips' or 'segments') into the DAW software, for later processing.

Recording is a complex process, involving a microphone that captures sound energy, translates it into electrical energy, and transmits it to an audio interface. The audio interface converts the electrical energy into digital signals, and sends it through the operating system to the DAW software. The DAW stores regions in memory and on the hard drive as required.

Each time the musicians perform some (or all) of the material to be recorded while the DAW is recording, the result is called a take. A successful recording usually requires several takes, due to the inconsistencies of musical performance and of the related technological aspects.

Mastering is the process through which a version of the final mix is prepared for distribution and listening. Mastering can be performed for many target formats, including CD, tape, SuperAudio CD, or hard drive. Mastering often involves a reduction in the information available in an audio file: recordings are commonly made with 20- or 24-bit samples, for example, and reduced to 16-bit samples for audio CD during mastering. While most physical formats (like CDs) also specify the audio signal's format, audio recordings mastered to hard drive can take on many formats, including OGG, FLAC, AIFF, MP3, and many others. This allows the person doing the mastering some flexibility in choosing the quality and file size of the resulting audio.

A track represents one channel, or a predetermined collection of simultaneous, inseparable channels (as is often the case with stereo audio). In the DAW's main window, tracks are usually represented as rows, whereas time is represented by columns.

A track may hold multiple regions, but usually only one of those regions can be heard at a time. The multitrack capability of modern software-based DAWs is one of the reasons for their success. Although each individual track can play only one region at a time, the use of multiple tracks allows the DAW's outputted audio to contain a virtually unlimited number of simultaneous regions. The most powerful aspect of this is that audio does not have to be recorded simultaneously in order to be played back simultaneously; you could sing a duet with yourself, for example.

Region, clip, and segment are synonyms: different software uses a different word to refer to the same thing. A region (or clip or segment) is the portion of audio recorded into one track during one take.

Regions are represented in the main DAW interface window as a rectangle, usually coloured, and always contained in only one track. Regions containing audio signal data usually display a spectrographic representation of that data. Regions containing MIDI signal data are usually displayed as a matrix-based representation of that data.

The transport is responsible for managing the current time in a session, and with it the playhead. The playhead marks the point on the timeline from where audio would be played, or to where audio would be recorded. The transport controls the playhead, and whether it is set for recording or only playback. The transport can move the playhead forward or backward, in slow motion, fast motion, or real time.

In most computer-based DAWs, the playhead can also be moved with the cursor. The playhead is represented on the DAW interface as a vertical line through all tracks. The transport's buttons and displays are usually located in a toolbar at the top of the DAW window, but some people prefer to have the transport controls detached from the main interface, and this is how they appear by default in Rosegarden.

Automation of the DAW sounds like it might be an advanced topic, or something used to replace decisions made by a human. This is absolutely not the case - automation allows the user to automatically make the same adjustments every time a session is played. This is superior to manual-only control because it allows very precise, gradual, and consistent adjustments, because it relieves you of having to remember the adjustments, and because it allows many more adjustments to be made simultaneously than you could make manually.

The reality is that automation allows super-human control of a session. Most settings can be adjusted by means of automation; the most common are the fader and the panner. The most common method of automating a setting is with a two-dimensional graph called an envelope, which is drawn on top of an audio track, or underneath it in an automation track. The user adds adjustment points by adding and moving points on the graph. This method allows for complex, gradual changes of the setting, as well as simple, one-time changes.
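Conceptually, an envelope is just a list of (time, value) points, with the setting interpolated between them during playback. A small sketch of that idea; the times and fader values are arbitrary:

```python
def envelope_value(points, time):
    """Return the automated setting at `time`, given sorted (time, value) points."""
    if time <= points[0][0]:
        return points[0][1]
    if time >= points[-1][0]:
        return points[-1][1]
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= time <= t1:
            fraction = (time - t0) / (t1 - t0)
            return v0 + fraction * (v1 - v0)  # linear interpolation between points

# A fade: the fader sits at 1.0, then ramps down to 0.25 between 10 s and 14 s.
fade = [(0.0, 1.0), (10.0, 1.0), (14.0, 0.25)]
for t in (5.0, 12.0, 20.0):
    print(t, envelope_value(fade, t))  # 1.0, 0.625, 0.25
```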

Automation is often controlled by means of MIDI signals, for both audio and MIDI tracks. This allows for external devices to adjust settings in the DAW, and vice-versa - you can actually automate your own hardware from within a software-based DAW!

Of course, not all hardware supports this, so refer to your device's user manual.

The clock shows the current place in the file, as indicated by the transport. Here, the transport is at the beginning of the session, so the clock indicates 0.

This clock is configured to show time in minutes and seconds, so it is a time clock. Other possible settings for clocks are to show BBT (bars, beats, and ticks — a MIDI clock), samples (a sample clock), or an SMPTE timecode (used for high-precision synchronization, usually with video — a timecode clock). Some DAWs allow the use of multiple clocks simultaneously.
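The different clock types are simply different ways of formatting the same transport position. The sketch below converts a sample position into a time clock and a BBT clock, assuming a fixed tempo and metre (real sessions may change both mid-song); the default values are illustrative:

```python
def time_clock(sample, sample_rate=48000):
    """Format a sample position as minutes:seconds (a time clock)."""
    seconds = sample / sample_rate
    return f"{int(seconds // 60):02d}:{seconds % 60:06.3f}"

def bbt_clock(sample, sample_rate=48000, tempo_bpm=120, beats_per_bar=4, ticks_per_beat=960):
    """Format a sample position as bars.beats.ticks (a MIDI-style BBT clock)."""
    beats = sample / sample_rate * tempo_bpm / 60
    bar = int(beats // beats_per_bar) + 1
    beat = int(beats % beats_per_bar) + 1
    tick = int((beats % 1) * ticks_per_beat)
    return f"{bar:03d}.{beat:02d}.{tick:04d}"

position = 0          # the transport at the very beginning of the session
print(time_clock(position), bbt_clock(position))  # 00:00.000 001.01.0000
later = 5 * 48000     # five seconds into the session
print(time_clock(later), bbt_clock(later))        # 00:05.000 003.03.0000
```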

A technique often used for studio recordings is to separately record parts that would normally be played together, and which will later be made to sound together (see the 'Preparing a Session' section, below). For example, consider a recording where one trumpeter wants to record both parts of a solo written for two trumpets. The orchestra could be brought into the studio, and would play the entire piece without either solo trumpet part. Ardour will record this on one track. Then, the trumpet soloist goes to the studio, and uses Ardour to simultaneously listen to the previously-recorded orchestra track while playing one of the solo trumpet parts, which is recorded onto another track. The next day, the trumpeter returns to the studio, and uses Ardour to listen to the previously-recorded orchestra track and previously-recorded solo trumpet part while playing the other solo trumpet part, which is recorded onto a third track. The recording engineer uses Ardour's mixing and editing features to make it sound as though the trumpeter played both solo parts at the same time, while the orchestra was there.


The program used to record these tracks was configured to record onto a separate track for the left and right channels, so Ardour will also have to be configured this way. It requires more setup, more memory, and more processing power, but it offers greater control over the stereo image and level balancing. We will use one track each for the vocals, clarinet, and strings, and two tracks for the marimba. This needs to be doubled to handle the stereo audio, so a total of ten tracks are needed. It might still be useful to manipulate the stereo tracks together, so we're going to combine them with five busses. This gives us the option of modifying both stereo channels or just one - you'll see how it works as the tutorial progresses.

All of these actions take place within Ardour. You guessed it though - there's more to it than that, and it mostly has to do with the setup of this particular file. You will notice that the region list has many similarly-named regions, and that most of the names correspond to particular tracks and a bus. The files are named so that you know what's on them. They are given a number so that you know the sequence in which they're to be added ('Marimba1' regions before 'Marimba2'), and a letter 'L' or 'R' at the end to signify whether the region is the left or the right channel. Furthermore, the regions that start with 'ens-' belong on the 'voice' tracks ('ens' is short for 'ensemble,' meaning that those regions contain a small vocal ensemble, whereas the 'Voice' regions contain just one singer).

The 'HereIsHow' regions belong before the 'CreatetheInconceivable' regions. Remember: there is no technical reason that the regions are named as they are. The names are there to help you edit and mix the song.

We don't need to use the 'marimba2' tracks or bus yet, so just add all of the 'Marimba' regions to the 'marimba1' tracks. Notice that when you made the first adjustment, Ardour put an arrow beside the region name in the region list of the session sidebar. If you click on the arrow, you will see that there is another copy of the same region underneath, but it's white. Ardour wants you to know that the white-coloured region is a modification of the blue-coloured region. If you drag the white-coloured region into the canvas area, you'll notice that it starts at the same time as the region you just modified. It can also be dragged out to the full size of the original region, which would create another modified version of the original. While it seems like Ardour stores multiple copies of the region, it actually just stores one copy, and the information required to make it seem like there are many.

Part of the power of recording with a DAW is that the same material can be captured multiple times. Mixing and matching like this allows us to seek the 'perfect' performance of a piece of music.

A few of the regions in this file are multiple takes of the same material. There are a few occasions where we can definitively say that one is better than the other, and there are a few occasions where it depends on your personal taste.

This section covers techniques that can be used to further cut up the audio, in this case with the end goal of comparing and choosing preferred sections. Not all choices will be made yet. Throughout this section, you will need to move un-placed regions out of the way, farther down the session, so that they don't interfere with the alignment process. Remember to lock the regions once you put them in place. They can be unlocked and re-aligned later, if you choose. Finally, it will help if you place a marker (like the 'marimba-start' marker that we placed earlier) where each region will start. When you place a marker, you can click on it, and move the blue place-marker line.

This will help you to align the start of sound in a region to the place where you want it to be. Finally, it should be noted that, more so than in the editing stage, the mixing stage should not be understood as progressing in a linear manner.

This means you should not be following the tutorial from start to finish, but jumping between sections as desired. You should set up the tracks for stereo output first, and then read through all the sections and follow their advice as you wish, sometimes returning to previous activities to re-tune those settings. When one setting is changed, it tends to have an effect on other settings, so if you set the level of a track once, then change its panning, you should check that the levels you set are still desirable - they'll probably need some tweaking, however minor it may be.

Part of the reason that the session sounds so bad is that all of the audio has been routed through both the left and right channels equally, making it a 'mono' recording, even though we have the material for a 'stereo' recording. This could easily have been done sooner, but it wouldn't have made much of a difference until now. Whereas editing was focussed on getting the regions assembled so that they are like the song, mixing is about fine-tuning the regions and tracks so that they make the song sound great. Setting up the initial panning takes quite a bit more thought than setting the initial levels. Different music will have different requirements, but the main purpose of adjusting the panning for this sort of recorded acoustic music is to ensure that each performer has a unique and unchanging position in the stereo image. When humans are listening to music, they implicitly ascribe a 'location' to the sound - where their brain thinks it should be coming from.

When listening to recorded music, we understand that the sound is actually coming from speakers or a set of headphones, and that the performers are not actually there. Even so, it can be difficult, tiring, and unpleasant to listen to music where the imagined position of a performer or sound is constantly changing - just as it's difficult and tiring to listen to music which has poorly balanced levels.

As if it weren't already difficult enough, the stereo image is created in our minds as a complex combination of many factors: quieter sounds and later sounds seem to be farther away than louder and earlier sounds. Although the DAW's panner can only put the signal somewhere in a straight line between 'all the way left' and 'all the way right,' our brains process sound as existing in a three-dimensional world. A master audio engineer will be able to control these factors with relative ease, but for us it's going to involve much more trial and error. A particular obstacle with this session is that the regions with the soloist put her in a different imagined position than the regions where the soloist is singing with other singers. Because these happen in the same tracks, we'll use automated panner and fader tracks to help solve this problem.

Listen for yourself: start at about 00:02:40.000, and pay attention to where the soloist seems to be standing in the 'Voice4' regions and the 'ens-CreatetheInconceivable' regions. It seems to me like she moves from nearby on the right to a farther distance just to the left; somehow without bumping into the other people in the vocal ensemble, or the strings, which also seem to be in the way! You might argue that most listeners would not pick this up, and that's probably the case.

Even so, I would counter that the drastic change of level and panning would be passively detected by those same people, even if they only consciously perceive it as being 'not quite right.' I chose that particular layout because it requires relatively minimal adjustment, and it makes a certain amount of sense in terms of traditional instrumental ensemble seating patterns. Also, the notes played by the clarinet in this song seem suitable to appear as if from far away, and the passages are played with good expression, so I think it will be relatively easy for me to achieve that effect. The most important consideration was the placement of the vocal ensemble and the solo vocalist within it. Although the solo vocalist sings the highest part in the ensemble ('soprano'), the stereo recording seems to indicate that she was not standing at the left-most position in the ensemble (I also know this because I was present during the recording). This adds an extra difficulty, in that the fader and panner settings for the whole voice track must be based on the moment in the 'ens-CreatetheInconceivable' region where the second-highest singer ('alto') sings just after the highest singer, who is the soloist. Make rough adjustments to most of the tracks, to place them in approximately the right space in the stereo image.

You may wish to adjust an individual track's panner setting, in addition to the busses' panner settings; they will have a slightly different effect. For the marimba tracks, you may wish to fine-tune things now, adjusting the fader settings.

Because these tracks are so consistent, they will require relatively little automation, and therefore will benefit more from a more thorough initial set-up procedure. Remember that it's better to be turning down the fader than turning it up!

So far, we've been crudely adjusting the fader and panner settings manually. This won't work if you want to change the settings while a session is playing; you would have to change all of the settings by yourself, every time you play the session. This quickly becomes complicated - not to mention difficult to remember. 'Automation' allows effects (like the panner and fader) to be moved automatically during session playback. An automation track is simply a track that contains no audio, but rather instructions to adjust a particular effect.

Automation tracks usually resemble audio tracks, but they hold lines and points, to show the settings changes. Automation tracks can, in effect, be 'recorded,' but we're going to use a more basic editing method.

Automation tracks can be assigned to busses and tracks.

By default, Ardour will export all audio in the range or session being exported.

What it actually exports is all audio routed through the master output bus. You can see the list of tracks to export on the right side of the 'Export' window. If you click the 'Specific Tracks' button, you will be able to choose from a list of all the tracks and busses in a session. Choosing specific tracks only makes sense if you do not want to export the master bus' output, so you should probably de-select that first.

FLAC: An open-source compressed format. A 'lossless' format, meaning no audio information is lost during compression and decompression. Audio quality is equal to WAV or AIFF formats. Capable of carrying metadata, so information like title, artist, and composer will be preserved.

Widely supported in Linux by default. For other popular operating systems, refer to Download Extras on the FLAC Website for a list of applications and programs capable of playing FLAC files. This is usually the best choice for distributing high-quality audio to listeners.

Ogg/Vorbis: An open-source compressed format.

A 'lossy' format, meaning some audio information is lost during compression and decompression. Audio quality is less than WAV or AIFF formats, but usually better than MP3. Capable of carrying metadata, so information like title, artist, and composer will be preserved. Widely supported in Linux by default. For other popular operating systems, follow the instructions on the Vorbis Website. This is a good choice for distributing good-quality audio to listeners.

But Qtractor is much more than just a starting-point: its simplicity is its greatest strength. Ardour and Rosegarden may offer more features, but Qtractor takes much less time to learn. After the initial learning curve, you will be able to complete almost every audio or MIDI project with Qtractor. Its interface offers simple, intuitive, point-and-click interaction with clips, integrated control of JACK connections, MIDI control integration with external devices and other MIDI-aware software, and support for LADSPA, DSSI, native VSTi, and LV2 plug-ins.

With development progressing very quickly, Qtractor is becoming more stable and usable by the minute. The simple interface allows you to focus on creating music to suit your creative needs. Qtractor is not available from the Fedora software repositories.

Qtractor is available from the 'Planet CCRMA at Home' and 'RPM Fusion' repositories. If you have already enabled one of those repositories, you should install Qtractor from that repository. If you have not already enabled one of those repositories, we recommend that you install Qtractor from the 'Planet CCRMA at Home' repository. See for instructions to enable the 'Planet CCRMA at Home' repository.

The 'Planet CCRMA at Home' repository contains a wide variety of music and audio applications. The 'Capture/Export' setting allows you to choose the format in which Qtractor stores its audio clips when recorded or exported. You will be able to choose a file type, such as 'WAV Microsoft' for standard '.wav' files, 'AIFF Apple-SGI' for standard '.aiff' files, or the preferable 'FLAC Lossless Audio Codec' format. FLAC is an open-source, lossless, compressed format for storing audio signals and metadata. See the FLAC Website for more information.

You will also be asked to select a quality setting for lossy compressed formats, or a sample format for all lossless formats. If you do not know which sample format to choose, then 'Signed 16-Bit' is a good choice for almost all uses, and will provide you with CD-quality audio.

Most non-speciality hardware is incapable of making good use of higher sample formats. See for more information about sample formats.

Duplex: Qtractor will follow incoming MMC instructions and provide outgoing MMC messages. You can also select a particular MIDI device number with which Qtractor will interact; if you do this, it will ignore MMC messages from other devices, and not send MMC messages to other devices. Enabling the 'Dedicated MIDI control input/output' option will provide JACK with MIDI inputs and outputs that will be used by Qtractor only for MMC messages. Qtractor will not send or receive MMC messages sent on other inputs or outputs if this option is enabled. 'SPP' stands for 'Song Position Pointer,' and helps MIDI-connected applications to keep track of the current location in a session (in other words, where the transport is).

This should probably be set to the same setting as 'MMC.' If you don't know which of these settings to use, then setting 'MMC' to 'None' is a good choice. This setting can be adjusted at any time, if you later decide to link applications with MMC messages.

Randomize: This tool adjusts the selected parameters to pseudo-random values.


The values are only pseudo-random for two reasons: computers cannot produce truly random numbers, only numbers that seem random to humans; the percentage value allows you to specify how widely-varied the results will be. A lower percentage setting will result in MIDI notes that are more similar to the pre-randomized state than if the MIDI notes were randomized with a higher percentage setting.
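A sketch of what a percentage-based randomization of note velocities might look like; this is illustrative only, and Qtractor's exact ranges and rounding may differ:

```python
import random

def randomize_velocities(velocities, percent):
    """Nudge each velocity by a random amount up to +/- percent of full scale."""
    spread = 127 * percent / 100
    result = []
    for v in velocities:
        nudged = v + random.uniform(-spread, spread)
        result.append(max(0, min(127, round(nudged))))  # keep within the MIDI range
    return result

random.seed(0)  # repeatable output for the example
notes = [64, 64, 64, 64]
print(randomize_velocities(notes, 10))   # small changes, close to the original
print(randomize_velocities(notes, 100))  # widely varied results
```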

The following parameters can be randomized.

Resize: This tool allows you to explicitly specify the duration or velocity of some MIDI notes.

Setting the 'Value' field will set the velocity (loudness) of all selected notes to that setting. Valid settings are from 0 (quietest) to 127 (loudest). Setting the 'Duration' field will set the duration (length) of all selected notes to that setting.

Duration is most usefully measured as 'BBT' (meaning 'Bars, Beats, and Ticks' - each is separated by a decimal point), but can also be measured as time or frames.

Qtractor can export all of a session's audio clips as one audio file, but it cannot export the MIDI clips directly into that audio file. This is because Qtractor does not synthesize audio from MIDI signals, but uses an external MIDI synthesizer to do this. Thankfully, there is a relatively simple way to overcome this, allowing both audio and MIDI to be exported in the same audio file: use Qtractor to record the audio signal produced by the MIDI synthesizer. This procedure only works if you use a MIDI synthesizer (like FluidSynth) which outputs its audio signal to JACK.

Another interesting aspect of this piece is that, unless you have access to the same audio recording that I used, you will not be able to experience the piece as I do. Playing the MIDI alone gives a completely different experience, and it is one that I knew would happen.

This sort of 'mix-and-match' approach to music-listening is more common than you might think, but rarely is it done in such an active way; normally, the 'extra sound' of listening to music is provided by traffic, machines like furnaces and microwave ovens, and even people in a concert hall or auditorium with you. The fact that my audio files cannot be legally re-distributed forced me to add a conscious creative decision into every listening of the piece. This corresponds to the 'first variation' in the audio file. Since variations are based on the theme, the rest of my sections are all somehow based on my theme section.

Here, I derived inspiration from the music again: there is a note (generally) every three beats like the theme, but I extended it to take up two beats, at the end of which another note briefly sounds. This is like Beethoven's technique in the first variation. Although I ignored them in the theme, there are small transitions between the inner-sections of Beethoven's theme, and I chose to add them into my first variation (you can see it in Qtractor's measure 69). However, what I intended to communicate was this: Beethoven wrote a lot of piano music, much of which is still enjoyed by people today. Nobody will ever be able to re-create the magic of Beethoven, and I feel that it would be silly to try; this is why I let the music sound silly, rather than attempting to make it sound serious. I also feel that taking inspiration from composers such as Beethoven is an excellent way to create new art for ourselves, which is why I am deriving certain cues directly from the music (mostly vague stylistic ones), but ignoring others (like the idea that pitches should be somehow organized).

I used one new technique while composing this section: copy-and-paste within the matrix editor. You can see this around the beginning of measure 103, where the same pitch-classes are heard simultaneously in a high and low octave. I created the upper register first, then selected the notes that I wanted to copy. I used Control-c and Control-v to create the copy.

Like when copy-and-pasting clips in the main window, the cursor icon changes to a clipboard, and an outline of the to-be-pasted material is shown so that you can position it as desired. As you will see, you can paste the copy onto any pitch level, and at any point in the measure. What is kept the same is the pitch intervals between notes and the rhythms between notes. In this passage, I kept the 'a note followed by three beats of rest' idea, then added onto the melody by taking two cues from the audio file. The first was the increasing surface rhythm of the upper part, which gave rise to the 'three-descending-notes' figures. The second was the fact that the chords are still going on underneath that melody, so I added a second randomized set of notes underneath my upper part. At the end of the passage I continued the trend that I started with a finishing flourish that picks up sustained notes.

This part of the piece was intended to mirror Beethoven's score quite obviously. The only real bit of trickery that I played was looking at Beethoven's score, and incorporating particular notes: the chord in measure 212 is composed of the same pitches that are used in the chord in the audio file in measure 210. It sounds very different because of the 'real piano vs. MIDI piano' issue, and because the tuning of the piano in the recording is different than the tuning of the MIDI piano. Also, the chord in the second beat of measure 213 is the first chord of the movement following the one recorded in the audio file. By including this (then 'resolving' it, then re-introducing it), I intend to play with the expectations of a listener who may already be familiar with Beethoven's sonata.