On resolution of double stars
After some years of double star observing, it seems
time to summarize as compactly as possible the current state of my
"insights" regarding visual resolution of double stars with
"small" telescopes. The time spent on the anachronistic passion of
visual double star observing has so far been very interesting, as I have learned a
lot and am still learning. For example, it took me some time to realise that
when looking at a star through a telescope I do not see the star itself but an optical
artefact produced by the telescope with the light coming from the star: the so-called
diffraction pattern. This is a fact we have to keep in mind when trying to
resolve double stars.
First, we have to define the meaning of
"resolution". Calling a double resolved is often understood as a clean
split with dark space between the star disks (the visible central part of the Airy
disks, also called spurious disks) of primary and secondary. The minimum aperture required for a clean
split seems theoretically easy to calculate. While the size of the Airy disk
(defined as the radius from peak intensity to the first minimum of the diffraction
pattern) is independent of the magnitude of a star, the size of the visible
central disk varies considerably with the brightness of the light source. The
central disk of very bright stars might occupy 80% or even a bit more of the
Airy disk, stars of +5mag already less than 50%, and somewhat
fainter stars of +9mag only about 25%, going down to near zero for really
faint stars. Despite some research, I did not find any solid information on the
relative size of the central disk depending on magnitude, so I did some
statistical work myself based on images of open clusters and double stars. This
approach might not have been very precise but seems good enough when
compared with the results of my visual observations so far (download the spreadsheet
for this calculation: size of spurious
disk.xls).
As the angular size of the Airy disk depends on the
diameter of the scope, you now know what to look for: the sum of the radii of the
spurious disks of primary and secondary should be smaller than the
separation of the double in arcseconds to show some dark space between the
components in your scope. This holds if both components are bright enough to be
resolved as single stars and none of the other influencing factors changes the
rules of the game.
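A minimal sketch of this check, assuming a simple linear interpolation between the rough disk-size fractions quoted above; the interpolation points, the 550nm wavelength and the example pair are illustrative assumptions, not measured values:

```python
# Rough worked example: will a given double show dark space between the
# spurious disks? Disk-size fractions are the rough estimates quoted above.
def airy_radius_arcsec(aperture_mm, wavelength_nm=550.0):
    """Radius from peak intensity to first minimum in arcseconds (1.22*lambda/D)."""
    return 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3) * 206265.0

def spurious_fraction(mag):
    """Very crude fraction of the Airy radius filled by the visible disk,
    interpolated from the rough estimates above (~80% for very bright stars,
    ~50% at +5mag, ~25% at +9mag, near zero for very faint stars)."""
    points = [(0.0, 0.8), (5.0, 0.5), (9.0, 0.25), (13.0, 0.05)]
    if mag <= points[0][0]:
        return points[0][1]
    for (m1, f1), (m2, f2) in zip(points, points[1:]):
        if mag <= m2:
            return f1 + (f2 - f1) * (mag - m1) / (m2 - m1)
    return points[-1][1]

def shows_dark_space(sep_arcsec, mag1, mag2, aperture_mm):
    r_airy = airy_radius_arcsec(aperture_mm)
    r_sum = r_airy * (spurious_fraction(mag1) + spurious_fraction(mag2))
    return r_sum < sep_arcsec

# Example: a 1.5" pair of +5 and +7 mag stars in a 120mm scope
print(shows_dark_space(1.5, 5.0, 7.0, 120.0))
```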
However, you need no clean split to recognize that an
object you observe is obviously not a single star: central disks might be
"kissing" or slightly overlap, might look like a dented rod or a rod
without dents, going down to an elongation with heavily overlapping central
disks giving the impression of an oval or "egg". To avoid trouble
with different concepts of resolution, I opt for the following minimum requirement to
consider a double as "resolved": the obvious recognition of an object as
definitely not a single star, clear enough to allow a confident
estimation of the angular position of the secondary (without doubts, therefore
excluding lucky guesses).
The number of factors influencing double star
resolution is, as already mentioned, large, and at least some of them interact
heavily. In the following I will discuss the ones I consider most important:
1. Physical attributes of the double star:
a. Separation:
The smaller the separation, the higher obviously the requirements regarding
aperture, with exponential increase. Larger separations pose less of a challenge, up
to none at all, with the exception of very faint pairs: secondaries of very
wide pairs may be resolved up to near the telescope magnitude limit. When doing
some statistical analysis of mostly my own observation reports I found the
relation X/sep (with X as a constant value to be defined) to be a good
foundation for estimating the aperture required for resolution depending
on separation. Treating X as a parameter determined by the least squares method
from my data set of resolved double stars, I noticed that Dawes did a good job:
the result was consistently ~116. One special point regarding separation for
close pairs is the effect of the position of the secondary in the diffraction
pattern of the primary; this will be discussed as a separate topic below
b. Magnitude
of primary: Too bright poses a problem with glare, but this is rather rare so I
see no reason to deal further with it. On the other side, increasing faintness
of the primary means an increasing challenge for resolution, as is obvious when
considering the simple case of a close, equally bright double. Beginning with
magnitudes fainter than +6mag, a slow increase of the requirements sets in, up to the
degree of non-resolution near the telescope magnitude limit. Statistical
analysis has also shown a dependence between separation and the difference
between telescope magnitude limit and magnitude of the primary working as an amplifier
for this relation. In simple words: how demanding the increasing magnitude of
the primary gets depends on how close we are to the telescope magnitude limit, with
exponential increase depending on separation
c. Magnitude
of secondary: This one is obvious: fainter secondaries are harder to resolve,
up to the degree of non-resolution near the telescope magnitude limit. To
resolve a companion near the TML a large separation is needed, and other
brighter stars in the field of view can make this even more difficult.
Statistical analysis has shown that this effect begins to show with magnitudes
fainter than +9mag, and it seems to work, a bit surprisingly, rather linearly
d. Spectral
class: On average we expect yellow light with ~550nm wavelength, but some stars
show a different spectrum. Wavelength not only has an effect on the size of the
diffraction pattern; some hues, especially reddish ones, make stars harder to
resolve as they appear visually fainter. The topic gets even more complicated
if we consider doubles with different spectral classes for primary and
secondary. From a statistical point of view it seems sufficient to stick with
the assumption of yellow light and consider variations of colors as "white
noise" - but be aware that a red
hue (especially of the companion) makes resolution significantly harder
e. Relations
between these factors, especially the difference between the magnitudes of
primary and secondary (delta_m): Increasing delta_m obviously makes it harder
to resolve doubles. Statistical analysis has shown that this effect starts with
delta_m larger than 1, with a strong exponential impact of separation (in the
basic form of delta_m/sep) and some minor side effects depending on the size of the
central obstruction: a large central obstruction and a large delta_m seem not to be
such a good combination.
2. Used telescope - a hot and controversial topic:
a. Aperture:
Defines the angular resolution limit and the magnitude limit of the scope,
usually indicated in the telescope specifications. Both values are to be taken
with caution; they are no hard facts but should give an idea what to expect with
some spread under reasonably good conditions. Obviously, the larger the aperture,
the better the chances for resolving a given double. As the aperture is usually
not a choice but a given, it is one of the most important factors for selecting
doubles during session planning. Bright doubles with a separation near the
angular resolution limit are often considered the most interesting to observe,
but faint and wide pairs near the magnitude limit can also offer an
interesting challenge
i. Telescope
angular resolution limit: Depends on aperture and is usually given as 116/D_mm
(Dawes) with D_mm as aperture in mm. This is an empirical value derived with
small refractors for equally bright ~6mag pairs. If we accept resolution as
observed distinctive elongation as discussed above, then this value is a bit
conservative: statistical analysis of several successful elongation observation
reports has shown that the very lowest resolution limit under otherwise very good
conditions might be around 0.5x Rayleigh, meaning 69/D_mm. On the other side, if we
demand a clean split, then for very bright pairs not even the Rayleigh criterion
with 138/D_mm is sufficient. How close we get to the angular resolution limit
of a scope depends a lot on seeing conditions, discussed later on
ii. Telescope
magnitude limit: Depends on aperture and is usually given as 2.7+5*LOG10[D_mm] or similar
(both limit formulas are put side by side in the small sketch after this item). It seems obvious
that such a formula can only give a very crude hint of what might be possible with
a given scope under very good conditions, but all efforts to provide a more precise approach, especially
Schaefer's work (see Schaefer
Telescopic Limiting Magnitude Calculator.html), have only
shown how difficult this is. I have made it a custom to start my observing
sessions by finding the faintest star I can resolve in the target field of view,
and I have found differences of ±1mag here depending on seeing conditions. So I know
what to expect for attempts to resolve wide doubles with very faint secondaries:
obviously it will be impossible to resolve secondaries fainter than the TML
regardless of separation. So far I have not found a precise separation value for
being able to resolve a secondary near or at the TML, as this depends very much on
the given transparency, but I think 30" separation is large enough to reduce
the challenge of resolving a double to that of resolving a single star. Another
interesting impression for close faint pairs: while it is often not possible to
get a crisp resolution even with averted vision, there is often some
shimmer to observe, like from a nebula - certainly no resolution but a strong hint of
a double
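Putting the two limit formulas from i. and ii. side by side as a small sketch (nothing beyond the formulas quoted above; the example apertures are arbitrary):

```python
import math

def dawes_limit_arcsec(aperture_mm):
    """Dawes' empirical limit for roughly equal +6mag pairs."""
    return 116.0 / aperture_mm

def rayleigh_limit_arcsec(aperture_mm):
    """Rayleigh criterion for yellow light (~550nm)."""
    return 138.0 / aperture_mm

def elongation_limit_arcsec(aperture_mm):
    """Rough lower bound for a confident elongation (~0.5x Rayleigh)."""
    return 69.0 / aperture_mm

def telescope_magnitude_limit(aperture_mm):
    """Common crude TML estimate; real values scatter by +/-1mag and more."""
    return 2.7 + 5.0 * math.log10(aperture_mm)

for d in (60, 100, 200):
    print(d, round(dawes_limit_arcsec(d), 2), round(rayleigh_limit_arcsec(d), 2),
          round(elongation_limit_arcsec(d), 2), round(telescope_magnitude_limit(d), 1))
```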
b. Size
of central obstruction (CO) as a fraction of the aperture, defining details of the
diffraction pattern, especially the size of the Airy disk and the peak intensity of the
central disk and diffraction rings (a small numerical illustration follows at the end of
this section). An often hot and controversial topic is how
good or bad the effects of CO might be for resolving doubles. I did several
experiments with different sizes of CO and found the relation between CO and
resolution to be not very intuitive. A small CO of ~0.2 seems even to offer some
advantage for resolving close and rather bright doubles with small delta_m by
reducing the size of the central disk. A large CO >0.35 seems to make
resolution of close and very unequal doubles somewhat harder due to the
much brighter diffraction pattern, adding the additional challenge of
resolving a secondary depending on its position within this pattern. For wide and
faint pairs the size of the CO seems not relevant
c. General
optical quality of the scope, usually specified either by the producer or by an
optical laboratory in terms of Strehl, defining peak intensity. Often a Strehl of
~0.95 is considered the target line for a scope to be good enough to deliver
images without visually noticeable deterioration. Obvious bottom line: good
optical quality makes resolution easier, at least for difficult cases, given
everything else equal. But usually this is not much of an issue, as the quality
delivered today is generally very good even for inexpensive scopes
d. Several
special topics regarding reflectors (collimation, thermal issues etc.). Bottom
line: If you want good performance from your scope for resolving
difficult doubles, then it is absolutely necessary to have no problems in this
regard
e. Focal
ratio: I am not sure if this is really an influencing factor, but higher focal
ratio scopes are often praised for crisper resolution. So far I have not much
experience in this regard, but in the long term I intend to do some comparisons
here. However, I once directly compared the image quality of a 60mm mask
(without CO) on my C925 with 2350mm focal length, a 60mm mask on my refractor
with 980mm focal length and my 60mm travel refractor with 355mm focal length,
with the conclusion that image stability increased strongly with focal
ratio
f.
Finally there is the question whether the resolution limits
of a given scope are a "disadvantage" for observing doubles. I think
certainly not, because you can use these limitations in some cases to question
given separations and magnitudes. If, for example, you have checked the current TML
of your scope against a star of known magnitude in your field of view and you cannot
resolve a wide double with a secondary of an advertised brighter magnitude,
then you know you have something to investigate further
g. Any scope
of reasonable quality is perfectly suited for double star observation within
its limits, and the number of suitable objects is huge even for very small
scopes. Nevertheless, there might be one exception: scopes with a noticeable
image shift when changing the direction of focus (basically all scopes that
move the main mirror to focus). When "zooming" in on a double
star you have to change eyepieces and therefore refocus, and a little image shift
can then become a bit irritating.
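To illustrate the intensity argument from point b., here is a minimal sketch using the standard annular-aperture diffraction formula from optics textbooks. It only shows how a central obstruction shifts light from the central disk into the first ring; it is not part of the RoT model discussed later:

```python
import numpy as np
from scipy.special import j1

def jinc(x):
    """2*J1(x)/x with the limit value 1 at x = 0."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    nz = x != 0
    out[nz] = 2.0 * j1(x[nz]) / x[nz]
    return out

def annular_psf(x, eps):
    """Normalized intensity of a circular aperture with central obstruction
    ratio eps (standard annular-aperture diffraction formula);
    x = pi * D * theta / lambda, so the first minimum of the unobstructed
    pattern sits at x ~ 3.83."""
    amp = (jinc(x) - eps**2 * jinc(eps * x)) / (1.0 - eps**2)
    return amp**2

# Peak of the first diffraction ring, as a fraction of the central peak,
# for a few obstruction ratios:
x = np.linspace(3.9, 7.0, 2000)   # region of the first bright ring
for eps in (0.0, 0.2, 0.35):
    print(eps, round(float(annular_psf(x, eps).max()), 4))
```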
3. Atmospheric influences:
This is the
source of our permanent failure to come near our theoretical resolution limits.
The best location
for visual astronomy would be out in space, and second best might be somewhere
up in the mountains above the clouds with rather dry air, but even then
there is necessarily an impact caused by the layers of air we are looking
through. Atmospheric influences are best checked with the quality of the visual
image of a bright star at a magnification high enough to potentially see the
diffraction pattern: the range goes from a perfect image with crisp central disk
and stable diffraction rings to snowballs without any structure, with all kinds of
fuzzy and jumping images in between. There exist many extensive
theoretical papers and descriptions of experiments on this topic, so I keep this
short. We have four main factors here:
a. Extinction:
The amount of air between observer and space leads to some reduction of the
brightness of a star, depending on the altitude of the star. This effect is
called extinction and has low impact on the resolution of brighter doubles but
some effect on fainter ones near the telescope magnitude limit. This
is one good reason to avoid anything below 35° altitude (a rough numerical
illustration follows after point d), but the most negative
effects of low altitude are actually general disturbances of the visual image
of the diffraction pattern (called atmospheric dispersion), leading to bad
chances for resolving especially close doubles, or even worse to false
positives
b. Stability
of the air, vulgo seeing: The degree of stability of the air is essential for
the stability of the visible diffraction pattern. Several scales exist
describing this effect, but words alone are not enough to use these scales with
some precision; animated images do a good job, for example Pickering.htm.
Jumpy and utterly destroyed diffraction patterns are obviously not this good
for resolving doubles, especially close ones. However, so-called bad seeing is
often interrupted by fractions of seconds of stability, sometimes long and
frequent enough to get good results despite "bad seeing" - a kind of
lucky imaging. So beginning with Pickering ~4-5 we can hope for some good
results. But even if we are not "lucky", bad seeing is no reason to
waste an otherwise clear sky: wide doubles offer interesting sessions even under
this condition, especially near the magnitude limit of the used
scope
c. Transparency
of the air: High humidity, fog, dust and so on, up to some fine sand from the
Sahara, produce a fuzzy image of the diffraction pattern, up to so-called
snowballs when the otherwise visible diffraction rings merge with the central disk
into a fuzzy ball like a globular cluster. Side effects of low transparency are
a halo around bright primaries hiding faint secondaries and generally making
faint stars still fainter, resulting in a reduction of the telescope magnitude
limit. A tad of low transparency is sometimes combined with good image
stability - a not-so-bad combination often allowing a good session. But more
than a tad of low transparency is really counterproductive for resolving close
doubles, leading to very frustrating sessions if the focus remains on close
doubles; wide doubles with companions up to the then reduced TML might save
the night. So far I have found no satisfying scale describing transparency but
have the impression that the size of the halo around a bright star in arcseconds
and the relation of the currently observed telescope magnitude limit to the
scope's specification are, in combination, a good measurement of transparency
d. Light
pollution: A really dark sky with a naked eye magnitude limit near +6mag is
visually a sensation. The Bortle Dark-Sky Scale, with an upper limit for
black nights of up to +8mag, is overly optimistic, as resolving such faint stars
would require falcon eyes. Anyway, observing double stars has the benefit of
low impact of light pollution, especially for brighter pairs up to +9mag; even
heavy light pollution adds only a few mm to the aperture required for
resolution. This changes a lot for fainter stars near the telescope magnitude
limit: together with extinction, light pollution might cost up to 10%
or even more of the telescope magnitude limit, and this means good bye
to the really faint fuzzies.
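As a rough illustration of the 35° rule of thumb from point a., a sketch assuming the simple plane-parallel approximation airmass ≈ 1/sin(altitude) and a typical zenith extinction of ~0.25mag per airmass; both are textbook approximations, and the real coefficient varies strongly with transparency:

```python
import math

def airmass(altitude_deg):
    """Plane-parallel approximation, good enough above ~20 degrees altitude."""
    return 1.0 / math.sin(math.radians(altitude_deg))

def extinction_mag(altitude_deg, k_zenith=0.25):
    """Extinction in magnitudes; k_zenith ~0.25 mag/airmass is only a
    typical value, the real value depends strongly on transparency."""
    return k_zenith * airmass(altitude_deg)

for alt in (90, 60, 35, 20):
    print(alt, round(airmass(alt), 2), round(extinction_mag(alt), 2))
```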
Bottom line:
Atmospheric influences make the difference between a great double star
observing session with spectacular and exceptional results and a modest one
with mediocre results. Even small changes in conditions may cause a great
spread in the aperture required for resolving a given double, up to no
resolution at all.
4. Quality of available double star data: My
entry point into double star observing was the Cambridge Double Star Atlas -
interesting but of limited use for detailed session planning. Next was the
catalog of probably the most prominent double star discoverer, Struve. It took
me some time to realise that the data of this catalog was for obvious reasons
no longer up to date, so finally I landed on the most important data source for
double star observers: the Washington Double Star Catalog, also available
online. This time it took me not very long to realise that even here the data
has to be taken with care.
The WDS Catalog
data is strictly based on observation reports; this is clearly a problem for
pairs with fast but different proper motion of the components, but also for
physical pairs with fast orbits. WDS note code "O" indicates the
existence of an entry in the WDS 6th Orbit Catalog, which should provide
ephemerides for the current date - important if the last registered observation
is not a recent one.
Many objects in
the WDS Catalog come with often crude magnitude estimations. If magnitudes are
given without decimals then a lot of caution is required, to some degree also if
magnitudes are given with only one decimal, as recent measurements
should show higher precision. Many objects are listed with magnitudes
outside the visual band, for example in the blue or red band (WDS note codes B
and K). Insufficient quality of magnitude data is not only a problem of the WDS
Catalog; the seemingly simple concept of visual magnitudes seems generally
somewhat shaky for stars fainter than, let's say, +10mag. You only have to look
up several catalogs for the same star (Hipparcos, Tycho-2, USNO, UCAC4 ...) to
get as many different values as there are catalogs.
While the
absolute quality of the data might not seem this significant when it comes to the
question of resolution, it is very much important when it comes to session planning
and expectations of which aperture might be sufficient. For some time I
reported obviously wrong WDS data directly to the WDS catalog organisation, but
in a rather unsystematic way, whenever it happened to occur. Later on I decided
to proceed in a more systematic way by publishing reports (in JDSO
and DSSC) with visual magnitudes based on processing of images taken with
remote terminals equipped with V-filters.
5. Position of the secondary in the diffraction pattern
of the primary:
Certainly of
relevance for resolving close and faint companions, especially for secondaries
sitting more or less centered on the first diffraction ring: can a secondary
as faint as the first ring be resolved, as it might for example produce a
thickening at this position, or will it get lost? If resolution with equal
brightness is possible, is it then also possible with a somewhat fainter
companion? If resolution with equal brightness is not possible, to what degree
does the secondary have to be brighter than the first ring? The next questions come for
the position of the secondary between the spurious disk and the first ring of the
primary, then outside the first ring, and so on
Diffraction
theory does, as far as I know, not deliver a clear answer in this regard, and the
existing criteria like Rayleigh, Dawes, Sparrow et al. claim resolution only
for equally bright components.
One solid
approach to determine the brightness of the first diffraction ring in terms of
visual magnitude is certainly to translate the difference in peak intensity
into a difference in magnitudes according to the logarithmic scale of magnitudes.
However, it seems questionable to set the peak intensity of the spurious disk of a
star equal to the peak intensity of a ring, at least when it comes to visual
observation.
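As a sketch of this translation, using the textbook diffraction-theory value that the first bright ring of an unobstructed aperture peaks at about 1.75% of the central peak intensity (this number comes from standard diffraction theory, not from my own measurements):

```python
import math

def intensity_ratio_to_delta_m(ratio):
    """Convert a peak-intensity ratio into a magnitude difference."""
    return -2.5 * math.log10(ratio)

# First bright ring of an unobstructed aperture peaks at ~1.75% of the
# central peak -> equivalent to a companion about 4.4mag fainter, if one
# (questionably) sets peak intensities of disk and ring equal.
print(round(intensity_ratio_to_delta_m(0.0175), 1))
```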
My experience in
this regard is so far not very conclusive, as there are not that many stars
bright enough to show a clear first diffraction ring with a companion close and
faint enough to compete with the brightness of the ring. Delta Cyg would for
example be a good candidate for such an experiment. I have once resolved Delta
Cyg with an aperture of 70mm, thus positioning the companion directly centered
on the first ring, but at that time I did not care about this question enough
to investigate further. Anyway, at least this means that a delta_m of ~3.5mag
is no big problem for resolving a companion centered on the first ring.
If interested in
such questions: download here Position
of Secondary in the Diffraction Pattern of the Primary.xls, a
spreadsheet giving the required apertures in mm to have the secondary at
specific positions in the diffraction pattern of the primary. The given values
are valid for refractors, but the radius of the first ring remains nearly unchanged
regardless of CO size, at least for visual observation. So any close double
with a primary bright enough to provide a visible first ring and a delta_m of
3 or better more should provide an interesting target for this topic.
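In the spirit of this spreadsheet, a minimal sketch of the underlying geometry for an unobstructed aperture and 550nm light; the coefficients 1.22 for the first minimum and ~1.635 for the first ring maximum are standard diffraction-theory values, and the 2.7" separation is just an illustrative example:

```python
def aperture_for_position_mm(sep_arcsec, coeff, wavelength_nm=550.0):
    """Aperture (mm) that places a companion at sep_arcsec at the position
    coeff * lambda / D in the primary's diffraction pattern."""
    rad_to_arcsec = 206265.0
    return coeff * wavelength_nm * 1e-6 * rad_to_arcsec / sep_arcsec

FIRST_MINIMUM = 1.22    # edge of the Airy disk (Rayleigh radius)
FIRST_RING_MAX = 1.635  # peak of the first bright ring

# Example: apertures placing a 2.7" companion on the first ring maximum
# and on the first minimum of the primary's pattern
print(round(aperture_for_position_mm(2.7, FIRST_RING_MAX)))   # ~69mm
print(round(aperture_for_position_mm(2.7, FIRST_MINIMUM)))    # ~51mm
```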
6. Personal attributes of the observer:
Several
individual factors might be relevant for whether a given double can be resolved or
not:
a. Experience:
The first look through a telescope is often (especially when unguided) rather
disappointing: there are all these images in the journals and on the internet
with incredible colors and details, and then all you see is a grey blotch. Then
the learning curve starts, slowly getting a bit steeper, so some experience is
necessary to be able to see details invisible to the beginner. And over time
some more or less individual techniques evolve for handling difficult cases:
use of averted vision, moving the target purposely through the field of view,
changing between intra-focal and extra-focal, changing magnification back and
forth (with not-so-good seeing, less is often more) ...
b. Human
optical system: The effects of eyes and brain on resolution. Lots of scientific
papers are available on this topic, but this is certainly not my field, so I
keep this extra short:
i. Age:
Exit pupil plays an important role, and according to Schaefer's work on magnitude
limits, age is actually an advantage. So complaints like "my old eyes
..." are to be considered fishing for compliments
ii. Personal
acuity: Usually defined by the magnification needed to split a given equally
bright double with a given separation; for example, splitting a 2" equal
double at x80 gives a personal acuity of 160"
iii. Ability
to detect minimally different shades of grey in terms of spots against the
background, vulgo the limiting magnitude of the eye
iv. Ability
to detect faint spots near brighter ones: This may be the other side of the
same coin; less sensitivity to glare might be an advantage here.
Other factors: So far we have covered most of the resolution-relevant
factors, but this list can never be complete, it seems.
At least there
are a lot of less common specific conditions: a primary with excessive glare,
multiples with a faint component between two brighter ones, specific
combinations of colors (for example blue-white for the primary and reddish for
the secondary), doubles near another bright star etc.
Bottom line: It
is impossible to consider all factors relevant for resolution, so any double
might pose a specific challenge.
Session
planning:
Looking through a telescope without a plan in
advance of what to observe means asking for frustration. You might rely on your
electronic equipment proposing some "best of the night" targets, but
just looking at objects, however beautiful they might be, without knowing some
facts in advance gets boring, at least in the long term. As a starter for double
star session planning you might use the many sources available online; as most
such links I found over the years are no longer active, I suggest your own research.
If you are impressed by the work of famous double
star discoverers you might work through the complete lists of, for example,
Struve with 4134 objects or Burnham with 1540 objects and compare your
observations with the current data in the Washington Double Star Catalog.
If you are interested in questionable objects in the
WDS Catalog, you have the separate "WDS Neglected Doubles" catalog
available as a base for your planning.
Sketching, imaging and measuring double stars might
offer an interesting challenge in addition to visual observation.
These are only a few possibilities, and there are many
more valuable resources available in the form of books (the Cambridge Double Star Atlas
or Haas' book on double stars for small telescopes) or a vast number of
websites discussing double stars.
Bottom line: Detailed session planning is highly
recommended, and the WDS Catalog is the most versatile data source to do this. A
side benefit of detailed session planning is constant learning about your sky, and
the next best thing to a good observing session is planning the next sessions
during foul weather. And even if the weather does not cooperate and time goes
by without a chance to execute a session plan, stars have the nice property of
coming back next season, and with few exceptions (fast orbits and new
measurements) double star session plans are valid for years to come.
As conditions (especially seeing and
transparency) are seldom known in advance, it seems wise to have alternative
session plans for different conditions available, or at least to include in
standard plans some objects of interest suitable for not-so-good seeing
conditions. To some (but really only to some) degree it is like with weather:
there is no such thing as bad seeing conditions, there are only wrong session
plans. Against cloudy nights there is no remedy, but even then fast moving
clouds may offer spectacular and dramatic sights of the moon with a small scope
or binoculars.
How to do session planning is, as indicated above, certainly
highly individual, depending on available equipment and tools and above all on
special interests and agendas. In my opinion, some agenda is needed to keep
long-term interest alive, but that might be just a personal attitude.
Rule of Thumb
(RoT) for a proposed aperture for resolving doubles:
Many attempts have been made to get a grasp on
calculating the aperture needed for resolving a given double star (or, the other
way around, the minimum separation resolvable with a given aperture), starting
with criteria for equally bright stars like those from Dawes or Rayleigh. While
Rayleigh's suggestion is based on optical theory (usually given for yellow
light to eliminate the question of spectral type), the Dawes criterion is based
on an average value derived from a set of observations. The Dawes criterion
"resolution limit in terms of separation in arcseconds = 116/aperture in
mm" (s=116/D_mm) is even used as the resolution limit in the technical
specifications of telescopes, giving this value an impression of precision. However,
it is obvious that for statistical reasons there has to be some spread around
this value derived as an average from a dataset. I know of no indication
in this regard from Dawes himself, but personal experience suggests a standard
deviation of ~14% around the mean value 116. This is also supported by an
empirically found lower limit for resolving doubles at about half the Rayleigh
criterion, as observation reports have shown, for rather bright pairs under
excellent conditions, distinctive elongations allowing estimation of position
and separation with confidence down to this value. This corresponds very well
with the above mentioned standard deviation value. If we sample, for example,
observation reports for equally bright doubles up to +6mag with a separation of 1
arcsecond under reasonably fair conditions, then 2/3 of the apertures of all
positive reports would be in the range 100-132mm, 95% in the range 84-148mm and
99.5% in the range 68-164mm.
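The quoted ranges follow directly from a mean of 116 and a standard deviation of ~14%; a minimal sketch of this arithmetic (the one, two and three sigma ranges round to approximately the values above):

```python
mean_x = 116.0           # Dawes constant taken as mean value
sd = 0.14 * mean_x       # ~14% standard deviation, ~16mm at 1" separation
sep = 1.0                # separation in arcseconds

for n_sigma, coverage in ((1, "~2/3"), (2, "~95%"), (3, "~99.5%")):
    lo = (mean_x - n_sigma * sd) / sep
    hi = (mean_x + n_sigma * sd) / sep
    print(coverage, round(lo), "-", round(hi), "mm")
```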
So much for equally bright doubles up to +6mag. Even if
we know that magnitudes significantly different from +6mag play a significant
role for resolution, we might take this as a base for ideas regarding the resolution
of unequal doubles. The instant idea popping up here is that the difference in
magnitude between primary and secondary (delta_m) might be the most important
factor besides separation. This suggests a simple modification of the Dawes
criterion in the form of s=116/D_mm*delta_m, as proposed by Bruce and Fred on
the Cloudy Nights Double Star Observing Forum; this would mean a linear
increase of separation and therefore aperture with delta_m. For equally bright doubles
(with delta_m less than 1) we fall back to Dawes by setting delta_m to 1. To
determine the proposed aperture for resolving a given unequal double we can
rearrange this formula to pD_mm=116/s*delta_m. A few tests with selected unequal
doubles quickly show that this simple approach might work well for bright and
not too unequal doubles; however, overall it is a rather poor performer, as it
ignores the magnitudes of the components of the double, and proposing the same
aperture for resolving a +5/7mag pair as for a +9/11mag pair is obviously
nonsense. Besides, the assumed linear relationship between separation and
delta_m is obviously an oversimplification.
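For completeness, the simple modified criterion discussed above as a sketch (this is the Cloudy Nights proposal, not my final RoT), showing exactly the weakness mentioned:

```python
def proposed_aperture_simple_mm(sep_arcsec, delta_m):
    """Modified Dawes criterion pD_mm = 116 / sep * delta_m,
    falling back to plain Dawes for near-equal pairs (delta_m < 1)."""
    return 116.0 / sep_arcsec * max(delta_m, 1.0)

# The formula only sees separation and delta_m, so a +5/+7mag pair and a
# +9/+11mag pair at 2" both get the same proposed aperture:
print(round(proposed_aperture_simple_mm(2.0, 7.0 - 5.0)))   # 116
print(round(proposed_aperture_simple_mm(2.0, 11.0 - 9.0)))  # 116
```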
This means we have to look for more advanced models;
at least three come to my mind: the "Fuzzy Difficulty Index" (http://www.carbonar.es/s33/Fuzzy-splitting/fuzzy-splitting.html) and
the attempts from Chris Lord (http://www.brayebrookobservatory.org/BrayObsWebSite/BOOKS/TELESCOPIC%20RESOLUTION.pdf) and
Napier-Munn (www.jdso.org/volume4/number4/Napier_Munn.pdf).
All these models have several shortcomings for different
reasons:
-
The "fuzzy" approach basically ignores the
question of aperture and is thus of no use for determining the aperture
required for resolution
-
Lord's model structure is also not of much use
when asking for a proposed aperture for a given double, as he uses aperture
classes, making this question a recursive one. But the main weakness of his
model is the fact that he also ignores the effect of fainter magnitudes, similar
to the simple RoT approach described above. He even argues that this has little
effect, even if the contrary is obvious. Otherwise his work is very interesting and
shows a deep knowledge of diffraction theory
-
The Napier-Munn model is based on statistical analysis
of several hundred observations, and the concept of working with probabilities
for resolution with a given aperture seems very interesting to me, especially in
the form of asking for the aperture with a 50% probability of resolution. But
the algorithm based on his model structure has an obvious numerical problem,
delivering for many test pairs an error as the proposed aperture for a 50%
resolution probability would be less than zero, which is obviously nonsense
-
Lord and Napier-Munn both work with a data set of
observations, but to my knowledge all of them were made with fixed apertures, so only
by chance might some observations really be at the limit regarding aperture;
most of them are not. So in my opinion the used data sets are in both cases not
up to the intended task, and especially the data set used by Lord seems outdated
and thus of no good use for serious statistical analysis.
What these models, despite their shortcomings, show
nicely for me is that there is no such thing as a limit in terms of required
aperture as one precise number, but an aperture range with a probability
distribution, for reasons of simplicity assumed symmetrical, although it is
evident that there is more room in needed aperture upwards than downwards. In reality,
there is no such thing as an upper limit, as conditions may be so bad that no
amount of aperture may be of help; problems with seeing might even work the
other way around, giving an advantage to the smaller aperture.
Observation reports for selected pairs from different
observers (see for example the Sissy Haas project) show that there is, with the
exception of the very easy doubles, usually a wide range of overlapping
apertures with positive and negative reports, reflecting different circumstances
when observing. And my own observation logs show such a range even for one
and the same observer, mostly depending on differences in seeing conditions.
As the number of influencing factors seems vast, any
attempt at a complete analytic model seems futile (as Schaefer's attempt at a
complete analytical model for the telescope magnitude limit has
shown), so in my opinion a statistical approach combining some basic
theoretical optical concepts with numerical curve fitting might be the best
approach.
The basic question remains which factors to include in
such a model; this depends certainly on the intended use of the model and
the information one can usually expect from observation reports.
For session planning, the parameters known in advance
are
- the data for the double
o separation
o magnitudes of both components
o delta_m
- the data of the scope available
o aperture
o size of CO
- average light pollution at the given location
- average extinction for the selected field of view.
The next step is then the sampling of as many observation
reports as possible of double star resolutions with the smallest possible aperture for the
given conditions. These might include some done with fixed apertures but obviously at the
"limit", but mostly such done with variable aperture (with the help of
aperture masks or iris diaphragms) to be sure of being at the "limit". In
addition, we are obviously not asking for the one and only "true
limit" observation done under perfect conditions, but for the many
different results reflecting the very stochastic behavior of photons. Meanwhile
I have a data set of several hundred observations of this kind available, but
only up to 200mm aperture. This should be large enough to cover the usual
amateur range for double star observing. Larger apertures would at reasonable
cost only be available as reflectors, making the use of aperture masks difficult,
as the central obstruction then quickly gets too large to allow reasonable
double star observing.
The next step is then deriving a model based on some
knowledge of optical theory and calculating the parameters of this model with
the help of nonlinear regression analysis, looking at the results and adapting
the model step by step towards a result with a reasonably small standard deviation
and a correlation coefficient near 1 as quality parameters. With an earlier
(meaning smaller) version of the current data set I have done this with the
following result:
pD_mm = proposed aperture diameter in mm with a resolution
probability of 50% (telescope Strehl 0.95 or better, reasonably good seeing,
reasonably good transparency, average personal acuity assumed) and a standard
deviation of 14%.
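Before going into the actual model structure, a minimal sketch of what "calculating the parameters by nonlinear regression" means in practice, with a deliberately simplified two-parameter model and made-up observations used purely for illustration (the real model and data set are those described below and in the spreadsheet):

```python
import numpy as np
from scipy.optimize import curve_fit

# Purely illustrative observations: separation ("), delta_m, minimal aperture (mm)
sep     = np.array([1.0, 1.5, 2.0, 0.8, 3.0, 1.2])
delta_m = np.array([0.5, 2.0, 3.5, 1.0, 4.0, 2.5])
ap_mm   = np.array([118., 105., 120., 150., 75., 160.])

def model(X, pr1, pr2):
    s, dm = X
    # Dawes-like base plus a delta_m term growing with smaller separation
    return 116.0 / s + pr1 * np.maximum(dm - 1.0, 0.0) / s**pr2

params, _ = curve_fit(model, (sep, delta_m), ap_mm, p0=(40.0, 1.0))
residuals = ap_mm - model((sep, delta_m), *params)
print("parameters:", params)
print("standard deviation of residuals:", residuals.std())
```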
The structure of the model is a sum of
submodels in the form of base+f(delta_m)+f(M1)+f(M2)+f(NEML), followed by a
TML check, with the submodels as follows:
Base = Dawes criterion 116/s, modified
depending on the size of the central obstruction, which reduces the size of the Airy disk
f(delta_m) = function of the magnitude difference in relation
to separation, including the negative influence of increasing CO. Current version of
f(delta_m): pr1*delta_m/sep^pr2*(1+CO)^pr3
if delta_m > 1, else zero, with pr1, pr2 and pr3 being parameters to be
determined by nonlinear regression analysis
f(M1) = function of the magnitude of the primary
depending on separation. Current version of f(M1): pr4*(M1+pr5)/(sep^pr6) if M1
> 6, else zero, assuming that this function only has to work for primaries fainter than
+6mag. To make things a bit complicated, there is the need to counterbalance
the exponential effect of smaller separations, so a decrease of this
function is necessary if the separation is smaller than the difference of the
constant 14 minus M1. I have no good explanation for this value 14, but it is
well supported by the existing data - and pr4, pr5 and pr6 are again parameters
to be determined by nonlinear regression analysis. Just another switch is
necessary to stop this subfunction at M1 = +12.5mag, as I see no need to go
below this value
f(M2) = function of the magnitude of the secondary:
pr7*(M2-9) if M2 > 9, else zero, assuming that this function only has to work for
secondaries fainter than +9mag. This assumption is backed by the existing data,
and pr7 is again a parameter to be determined by nonlinear regression analysis
f(NEML) = function of the Naked Eye Magnitude Limit for a
given location, including extinction in the field of view (and not at the zenith as
usually used): pr8*(6.5-NEML), with the assumption that this only has to work
for doubles with secondaries fainter than +9mag; pr8 is again a parameter to
be determined by nonlinear regression analysis. The constant 6.5 assumes that
a perfect sky offers a NEML of +6.5mag.
TML check: After calculating pD_mm comes a check whether
the proposed aperture diameter is large enough to resolve the secondary as a
single star depending on the TML; if not, an accordingly larger value is calculated,
giving pD_mm' = the proposed aperture diameter after the check against the
telescope magnitude limit adapted for light pollution and extinction. This is done
for all doubles in an approximation
process but is usually only relevant for
doubles with companions fainter than +10mag. The final value for
the proposed aperture diameter is then pD_mm', if larger than the pD_mm of the first step.
The current implementation seems very optimistic, so I mostly work here
with NEML 2.5 to get a bit more realistic results, but even then these are often
still rather too optimistic. This is probably also due to the fact that I have
to calculate the TML from the given aperture for the given NEML, while reality shows
that there is a wide band of TML variation of up to 1.2mag under seemingly
identical conditions.
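Putting the structure described above into a runnable sketch: the pr1...pr8 values below are placeholders (the fitted values live in the RoT spreadsheet), the CO correction of the base term and the form of the TML adaptation are my assumptions for illustration, and the 14-M1 counterbalance switch is omitted. So this is a sketch of the structure, not the reference implementation:

```python
import math

# Placeholder parameters: the fitted values live in the RoT spreadsheet, the
# numbers below are NOT the real ones, they only make the sketch runnable.
PR = dict(pr1=1.0, pr2=1.0, pr3=1.0, pr4=1.0, pr5=0.0, pr6=1.0, pr7=1.0, pr8=1.0)

def rot_aperture_mm(sep, m1, m2, co=0.0, neml=6.5, p=PR):
    """Sketch of the RoT structure base + f(delta_m) + f(M1) + f(M2) + f(NEML)
    followed by a TML check (50% resolution probability, ~14% std deviation)."""
    delta_m = m2 - m1
    # Base: Dawes criterion; the CO correction factor here is only a placeholder
    # for "reducing the size of the Airy disk"
    base = 116.0 / sep * (1.0 - 0.1 * co)
    # f(delta_m): only for pairs more unequal than 1mag, larger CO works against us
    f_dm = p["pr1"] * delta_m / sep ** p["pr2"] * (1.0 + co) ** p["pr3"] if delta_m > 1 else 0.0
    # f(M1): only for primaries fainter than +6mag, capped at +12.5mag
    # (the "14 - M1" counterbalance switch for small separations is omitted here)
    f_m1 = p["pr4"] * (min(m1, 12.5) + p["pr5"]) / sep ** p["pr6"] if m1 > 6 else 0.0
    # f(M2): only for secondaries fainter than +9mag, roughly linear
    f_m2 = p["pr7"] * (m2 - 9.0) if m2 > 9 else 0.0
    # f(NEML): light pollution and extinction, only relevant for faint secondaries
    f_neml = p["pr8"] * (6.5 - neml) if m2 > 9 else 0.0
    pd_mm = base + f_dm + f_m1 + f_m2 + f_neml
    # TML check: how the adaptation for light pollution/extinction might look is an
    # assumption here; enlarge the aperture until the secondary itself is within reach
    tml = 2.7 + 5.0 * math.log10(pd_mm) - (6.5 - neml)
    while m2 > tml:
        pd_mm *= 1.05
        tml = 2.7 + 5.0 * math.log10(pd_mm) - (6.5 - neml)
    return pd_mm

# Example call (placeholder parameters, so the resulting number is meaningless):
print(round(rot_aperture_mm(sep=1.5, m1=7.0, m2=9.5, co=0.3, neml=5.0)))
```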
Required input for the RoT calculation therefore: size of the
central obstruction of the scope planned to be used, naked eye magnitude limit for the
specific location and average extinction for the given altitude, double star
separation, magnitude of the primary, magnitude of the secondary.
Intentional limitations: primary up to +12.5mag,
secondary up to +14mag and size of CO up to 0.4; these values are considered to
be at the upper limit of what is of use for amateur astronomers.
Other known limitations: Several specific conditions
like multiples with close faint companions in between brighter ones or very
bright primaries with glare, humidity in the air giving halos around brighter
primaries, doubles against bright nebulae, secondaries in the red color
spectrum etc. are not covered by the algorithm but only by the probability
concept - meaning bad luck.
Known weaknesses: The final TML check is currently too
optimistic, meaning pD_mm' is often calculated too small for a realistic 50%
chance of resolution of very faint companions. Also, the influence of CO is
currently not implemented precisely enough. Experiments have shown that a small
CO of ~0.175 might be the peak value for positive effects, ~0.25
might on average be the value where the negative effects slowly begin to
evolve, with a serious visual impact beginning at 0.35.
The current version of the RoT model is available for
download as spreadsheet WRAKs
RoT.xls.
2021-12-12/Wilfried
Knapp