Type: | Package |
Title: | Flexible, Extensible, & Reproducible Pupillometry Preprocessing |
Version: | 3.0.0 |
Date: | 2025-09-16 |
Language: | en-US |
Description: | Pupillometry offers a non-invasive window into the mind and has been used extensively as a psychophysiological readout of arousal signals linked with cognitive processes like attention, stress, and emotional states [Clewett et al. (2020) <doi:10.1038/s41467-020-17851-9>; Kret & Sjak-Shie (2018) <doi:10.3758/s13428-018-1075-y>; Strauch (2024) <doi:10.1016/j.tins.2024.06.002>]. Yet, despite decades of pupillometry research, many established packages and workflows to date lack design patterns based on Findability, Accessibility, Interoperability, and Reusability (FAIR) principles [see Wilkinson et al. (2016) <doi:10.1038/sdata.2016.18>]. 'eyeris' provides a modular, performant, and extensible preprocessing framework for pupillometry data with BIDS-like organization and interactive output reports [Esteban et al. (2019) <doi:10.1038/s41592-018-0235-4>; Gorgolewski et al. (2016) <doi:10.1038/sdata.2016.44>]. Development was supported, in part, by the Stanford Wu Tsai Human Performance Alliance, Stanford Ric Weiland Graduate Fellowship, Stanford Center for Mind, Brain, Computation and Technology, NIH National Institute on Aging Grants (R01-AG065255, R01-AG079345), NSF GRFP (DGE-2146755), McKnight Brain Research Foundation Clinical Translational Research Scholarship in Cognitive Aging and Age-Related Memory Loss, American Brain Foundation, and the American Academy of Neurology. |
Encoding: | UTF-8 |
Depends: | R (≥ 4.1) |
Imports: | eyelinker, dplyr, gsignal, purrr, zoo, cli, rlang, stringr, utils, stats, graphics, grDevices, tidyr, progress, data.table, withr, lifecycle, MASS, viridis, fields, jsonlite, rmarkdown, DBI, glue, base64enc, arrow |
RoxygenNote: | 7.3.3 |
Suggests: | duckdb, knitr, testthat (≥ 3.0.0), devtools |
VignetteBuilder: | knitr |
License: | MIT + file LICENSE |
Config/testthat/edition: | 3 |
URL: | https://shawnschwartz.com/eyeris/, https://github.com/shawntz/eyeris/ |
BugReports: | https://github.com/shawntz/eyeris/issues |
NeedsCompilation: | no |
Packaged: | 2025-09-16 23:14:51 UTC; shawn.schwartz |
Author: | Shawn Schwartz |
Maintainer: | Shawn Schwartz <shawn.t.schwartz@gmail.com> |
Repository: | CRAN |
Date/Publication: | 2025-09-17 07:50:02 UTC |
eyeris: Flexible, Extensible, & Reproducible Pupillometry Preprocessing
Description
Pupillometry offers a non-invasive window into the mind and has been used extensively as a psychophysiological readout of arousal signals linked with cognitive processes like attention, stress, and emotional states [Clewett et al. (2020) doi:10.1038/s41467-020-17851-9; Kret & Sjak-Shie (2018) doi:10.3758/s13428-018-1075-y; Strauch (2024) doi:10.1016/j.tins.2024.06.002]. Yet, despite decades of pupillometry research, many established packages and workflows to date lack design patterns based on Findability, Accessibility, Interoperability, and Reusability (FAIR) principles [see Wilkinson et al. (2016) doi:10.1038/sdata.2016.18]. 'eyeris' provides a modular, performant, and extensible preprocessing framework for pupillometry data with BIDS-like organization and interactive output reports [Esteban et al. (2019) doi:10.1038/s41592-018-0235-4; Gorgolewski et al. (2016) doi:10.1038/sdata.2016.44]. Development was supported, in part, by the Stanford Wu Tsai Human Performance Alliance, Stanford Ric Weiland Graduate Fellowship, Stanford Center for Mind, Brain, Computation and Technology, NIH National Institute on Aging Grants (R01-AG065255, R01-AG079345), NSF GRFP (DGE-2146755), McKnight Brain Research Foundation Clinical Translational Research Scholarship in Cognitive Aging and Age-Related Memory Loss, American Brain Foundation, and the American Academy of Neurology.
Author(s)
Maintainer: Shawn Schwartz shawn.t.schwartz@gmail.com (ORCID)
Other contributors:
Mingjian He [contributor]
Haopei Yang [contributor]
Alice Xue [contributor]
Gustavo Santiago-Reyes [contributor]
See Also
Useful links:
https://shawnschwartz.com/eyeris/
https://github.com/shawntz/eyeris/
Report bugs at https://github.com/shawntz/eyeris/issues
Add unique event identifiers to handle duplicate event messages
Description
This function adds a new column, text_unique, to each events table, creating a unique identifier for each occurrence of the same event message by appending a count number. This prevents repeated messages such as "GOAL" from being merged across separate occurrences.
Usage
add_unique_event_identifiers(events_list)
Arguments
events_list |
A list of event data frames (one per block) |
Details
This function is called by the exposed wrapper load_asc()
Value
Updated events list with text_unique
column added to each
data frame
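For illustration, a minimal sketch of how repeated messages can be disambiguated with an occurrence counter (this assumes a text column holding the event messages and is not the package's internal implementation):
library(dplyr)
events <- data.frame(text = c("GOAL", "FIXATION", "GOAL", "GOAL"))
events |>
  dplyr::group_by(text) |>
  dplyr::mutate(text_unique = paste0(text, "_", dplyr::row_number())) |>
  dplyr::ungroup()
# text_unique: "GOAL_1", "FIXATION_1", "GOAL_2", "GOAL_3"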
Add unique identifiers to a single events data frame
Description
This function is called by the exposed wrapper load_asc()
Usage
add_unique_identifiers_to_df(events_df)
Arguments
events_df |
A single events data frame |
Value
Updated events data frame with text_unique
column
Save out pupil time series data in a BIDS-like structure
Description
This method provides a structured way to save out pupil data in a BIDS-like structure. The method saves out epoched data as well as the raw pupil time series, and formats the directory and filename structures based on the metadata you provide.
Usage
bidsify(
eyeris,
save_all = TRUE,
epochs_list = NULL,
bids_dir = NULL,
participant_id = NULL,
session_num = NULL,
task_name = NULL,
run_num = NULL,
save_raw = TRUE,
html_report = TRUE,
report_seed = 0,
report_epoch_grouping_var_col = "matched_event",
verbose = TRUE,
csv_enabled = TRUE,
db_enabled = FALSE,
db_path = "my-project",
parallel_processing = FALSE,
merge_epochs = deprecated(),
merge_runs = deprecated(),
pdf_report = deprecated()
)
Arguments
eyeris |
An object of class eyeris |
save_all |
Logical flag indicating whether all epochs are to be saved or only a subset of them. Defaults to TRUE |
epochs_list |
List of epochs to be saved. Defaults to NULL |
bids_dir |
Base BIDS directory. Defaults to NULL |
participant_id |
BIDS subject ID. Defaults to NULL |
session_num |
BIDS session ID. Defaults to NULL |
task_name |
BIDS task ID. Defaults to NULL |
run_num |
BIDS run ID. Optional override for the run number when there's only one block of data present. Defaults to NULL |
save_raw |
Logical flag indicating whether to save raw pupil data in addition to epoched data. Defaults to TRUE |
html_report |
Logical flag indicating whether to save out the interactive HTML report. Defaults to TRUE |
report_seed |
Random seed for the plots that will appear in the report. Defaults to 0 |
report_epoch_grouping_var_col |
String name of grouping column to use for epoch-by-epoch diagnostic plots in an interactive rendered HTML report. Column name must exist (i.e., be a custom grouping variable name set within the metadata template of your epoch() call). Defaults to "matched_event" |
verbose |
A flag to indicate whether to print detailed logging messages. Defaults to TRUE |
csv_enabled |
Logical flag indicating whether to write CSV output files. Defaults to TRUE |
db_enabled |
Logical flag indicating whether to write data to a DuckDB eyerisdb database. Defaults to FALSE |
db_path |
Database filename or path. Defaults to "my-project" |
parallel_processing |
Logical flag to manually enable parallel database processing. When TRUE, temporary databases are created to avoid concurrency issues. Defaults to FALSE |
merge_epochs |
(Deprecated) This parameter is deprecated and will be ignored. All epochs are now saved as separate files following BIDS conventions. This parameter will be removed in a future version |
merge_runs |
(Deprecated) This parameter is deprecated and will be ignored. All runs are now saved as separate files following BIDS conventions. This parameter will be removed in a future version |
pdf_report |
(Deprecated) Use html_report instead |
Details
In the future, we intend for this function to save out the data in an official BIDS format for eyetracking data (following the BIDS extension proposal for eye tracking that is currently under review). At this time, however, this function instead takes a more BIDS-inspired approach to organizing the output files for preprocessed pupil data.
Value
Invisibly returns NULL. Called for its side effects.
See Also
Examples
# bleed around blink periods just long enough to remove majority of
# deflections due to eyelid movements
demo_data <- eyelink_asc_demo_dataset()
# example with unepoched data
demo_data |>
eyeris::glassbox() |>
eyeris::bidsify(
bids_dir = tempdir(), # <- MAKE SURE TO UPDATE TO YOUR DESIRED LOCAL PATH
participant_id = "001",
session_num = "01",
task_name = "assocret",
run_num = "01",
save_raw = TRUE, # save out raw time series
html_report = TRUE, # generate interactive report document
report_seed = 0 # make randomly selected plot epochs reproducible
)
# example with epoched data
demo_data |>
eyeris::glassbox() |>
eyeris::epoch(
events = "PROBE_{startstop}_{trial}",
limits = c(-1, 1), # grab 1 second prior to and 1 second post event
label = "prePostProbe" # custom epoch label name
) |>
eyeris::bidsify(
bids_dir = tempdir(), # <- MAKE SURE TO UPDATE TO YOUR DESIRED LOCAL PATH
participant_id = "001",
session_num = "01",
task_name = "assocret",
run_num = "01"
)
# example with run_num for single block data
demo_data <- eyelink_asc_demo_dataset()
demo_data |>
eyeris::glassbox() |>
eyeris::epoch(
events = "PROBE_{startstop}_{trial}",
limits = c(-1, 1),
label = "prePostProbe"
) |>
eyeris::bidsify(
bids_dir = tempdir(),
participant_id = "001",
session_num = "01",
task_name = "assocret",
run_num = "03" # override default run-01 (block_1) to use run-03 instead
)
# example with database storage enabled
demo_data |>
eyeris::glassbox() |>
eyeris::epoch(
events = "PROBE_{startstop}_{trial}",
limits = c(-1, 1),
label = "prePostProbe"
) |>
eyeris::bidsify(
bids_dir = tempdir(),
participant_id = "001",
session_num = "01",
task_name = "assocret",
db_enabled = TRUE, # enable eyerisdb database storage
db_path = "my-project" # custom project database name
)
# example for large-scale cloud compute (database only, no CSV files)
demo_data |>
eyeris::glassbox() |>
eyeris::bidsify(
bids_dir = tempdir(),
participant_id = "001",
session_num = "01",
task_name = "assocret",
csv_enabled = FALSE, # disable CSV files
db_enabled = TRUE # database storage only
)
Bin pupil time series by averaging within time bins
Description
This function bins pupillometry data by dividing time into equal intervals and averaging the data within each bin. Unlike downsampling, binning averages data points within each time bin.
Usage
bin(eyeris, bins_per_second, method = "mean", call_info = NULL)
Arguments
eyeris |
An object of class eyeris |
bins_per_second |
The number of bins to create per second of data |
method |
The binning method: "mean" (default) or "median" |
call_info |
A list of call information and parameters. If not provided, it will be generated from the function call. Defaults to NULL |
Details
Binning divides one second of pupillary data into X bins and averages the pupillometry data around each bin center. The resulting time points will be 1/(2X), 3/(2X), 5/(2X), ... seconds, where X is the number of bins per second.
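For example, a quick sketch of the bin-center time points implied by this formula, assuming bins_per_second = 10:
x <- 10 # bins per second
seq(from = 1 / (2 * x), by = 1 / x, length.out = 5)
# 0.05 0.15 0.25 0.35 0.45 (seconds)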
This approach is commonly used in pupillometry research to study temporal dynamics of pupil dilatory response; however, it should be used with caution (as averaging within bins can distort the pupillary dynamics).
Value
An eyeris
object with binned data and updated sampling rate
Note
This function is part of the glassbox()
preprocessing pipeline and is not
intended for direct use in most cases. Provide parameters via
bin = list(...)
.
Advanced users may call it directly if needed.
See Also
glassbox()
for the recommended way to run this step as
part of the full eyeris glassbox preprocessing pipeline
downsample()
for downsampling functionality
Examples
demo_data <- eyelink_asc_demo_dataset()
# bin data into 10 bins per second using the (default) "mean" method
demo_data |>
eyeris::glassbox(bin = list(bins_per_second = 10, method = "mean")) |>
plot(seed = 0)
Bin pupil data into specified time bins
Description
This function bins pupil data into specified time bins using either mean or median aggregation. It creates evenly spaced bins across the time series and aggregates pupil values within each bin.
Usage
bin_pupil(x, prev_op, bins_per_second, method, current_fs)
Arguments
x |
A data frame containing the pupil time series data |
prev_op |
The name of the previous operation's output column |
bins_per_second |
Number of bins per second (positive integer) |
method |
Aggregation method: "mean" or "median" |
current_fs |
Current sampling rate in Hz |
Details
This function is called by the exposed wrapper bin()
.
Value
A data frame with binned pupil data containing columns:
- time_secs: Bin center timestamps
- pupil_binned_{method}_{bins_per_second}hz: Binned pupil values
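As a rough sketch of the aggregation described above (an assumed re-implementation for illustration, not the eyeris internals; the column names here are illustrative only):
bin_sketch <- function(time_secs, pupil, bins_per_second, method = "mean") {
  bin_width <- 1 / bins_per_second
  bin_id <- floor(time_secs / bin_width) # which bin each sample falls in
  agg_fun <- if (method == "median") stats::median else mean
  data.frame(
    time_secs = (sort(unique(bin_id)) + 0.5) * bin_width, # bin centers
    pupil_binned = as.numeric(tapply(pupil, bin_id, agg_fun, na.rm = TRUE))
  )
}
bin_sketch(time_secs = seq(0, 0.999, by = 0.001),
           pupil = rnorm(1000, mean = 4000),
           bins_per_second = 10)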
Calculate Euclidean distance between points
Description
Calculate Euclidean distance between points
Usage
calc_euclidean_dist(x1, y1, x2 = 0, y2 = 0)
Arguments
x1 |
First x coordinate or vector of x coordinates |
y1 |
First y coordinate or vector of y coordinates |
x2 |
Second x coordinate or vector of x coordinates (defaults to 0) |
y2 |
Second y coordinate or vector of y coordinates (defaults to 0) |
Value
A numeric vector of Euclidean distances
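Concretely, this is the standard formula sqrt((x1 - x2)^2 + (y1 - y2)^2); for example, the distance of gaze coordinates (3, 4) from the origin:
sqrt((3 - 0)^2 + (4 - 0)^2)
# 5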
Calculate confounds for epoched data
Description
Helper function to calculate confounds for epoched time series data.
This function is used internally by both summarize_confounds()
and epoch()
.
Usage
calculate_epoched_confounds(eyeris, epoch_names, hz, verbose = TRUE)
Arguments
eyeris |
An object of class eyeris |
epoch_names |
A vector of epoch names to process |
hz |
The sampling rate |
verbose |
A flag to indicate whether to print progress messages |
Value
An updated eyeris
object with epoched confounds
Check and create directory if it doesn't exist
Description
Checks if a directory exists and creates it if it doesn't. Provides informative messages about the process.
Usage
check_and_create_dir(basedir, dir = NULL, verbose = TRUE)
Arguments
basedir |
The base directory path |
dir |
The subdirectory to create (optional) |
verbose |
Whether to display status messages |
Value
No return value; creates directory if needed
Check baseline and epoch counts match
Description
Validates that the number of baseline epochs matches the number of epochs.
Usage
check_baseline_epoch_counts(epochs, baselines)
Arguments
epochs |
A list of epoch data |
baselines |
A list of baseline data |
Value
No return value; throws error if counts don't match
Check baseline input arguments
Description
Validates that baseline inputs are properly specified.
Usage
check_baseline_inputs(events, limits)
Arguments
events |
Event messages for baseline extraction |
limits |
Time limits for baseline extraction |
Value
No return value; throws error if inputs are invalid
Check if baseline mean is zero
Description
Validates that baseline mean is not zero for divisive baseline correction.
Usage
check_baseline_mean(x)
Arguments
x |
The baseline mean value to check |
Value
No return value; throws error if baseline mean is zero
Check if column exists in data frame
Description
Validates that a specified column exists in a data frame.
Usage
check_column(df, col_name)
Arguments
df |
The data frame to check |
col_name |
The column name to look for |
Value
No return value; throws error if column doesn't exist
Check if object is of class eyeris
Description
Validates that an object is of class eyeris
.
Usage
check_data(eyeris, fun)
Arguments
eyeris |
The eyeris object to check |
fun |
The function name for error message |
Value
No return value; throws error if object is not eyeris
class
Check for DuckDB availability
Description
This internal helper checks whether the duckdb package is installed. If it is not available, a status message is displayed with platform-specific installation instructions (macOS, Linux, Windows). Functions that depend on DuckDB call this check before proceeding.
Usage
check_duckdb()
Value
TRUE
if duckdb is installed, otherwise FALSE
(with an
informative status message).
Check epoch input for plotting
Description
Validates that exactly one epoch is specified for plotting.
Usage
check_epoch_input(epochs)
Arguments
epochs |
A list of epoch data |
Value
No return value; throws error if more than one epoch is specified
Check epoch manual input data structure
Description
Validates that the events argument is a list of two data frames.
Usage
check_epoch_manual_input_data(ts_list)
Arguments
ts_list |
A list containing both start and end timestamp data frames |
Value
No return value; throws error if structure is invalid
Check epoch manual input data frame format
Description
Validates that start and end timestamp data frames have required columns.
Usage
check_epoch_manual_input_dfs(ts_list)
Arguments
ts_list |
A list containing start and end timestamp data frames |
Value
No return value; throws error if format is invalid
Check epoch message values against available events
Description
Validates that specified event messages exist in the eyeris
object.
Usage
check_epoch_msg_values(eyeris, events)
Arguments
eyeris |
The eyeris object |
events |
A data frame containing event messages to validate |
Value
No return value; throws error if invalid messages are found
Check if input argument is provided
Description
Validates that a required argument is not NULL and throws an error if missing.
Usage
check_input(arg)
Arguments
arg |
The argument to check |
Value
No return value; throws error if argument is NULL
Check limits in wildcard mode
Description
Validates that limits are provided when using wildcard mode.
Usage
check_limits(limits)
Arguments
limits |
Time limits for epoch extraction |
Value
No return value; throws error if limits are missing in wildcard mode
Check if pupil_raw column exists
Description
Validates that the pupil_raw column exists in the eyeris
object.
Usage
check_pupil_cols(eyeris, fun)
Arguments
eyeris |
The eyeris object |
fun |
The function name for error message |
Value
No return value; throws error if pupil_raw column is missing
Check start and end timestamps are balanced
Description
Validates that start and end timestamp data frames have the same number of rows.
Usage
check_start_end_timestamps(start, end)
Arguments
start |
The start timestamp data frame |
end |
The end timestamp data frame |
Value
No return value; throws error if timestamps are unbalanced
Check time series monotonicity
Description
Validates that a time vector is monotonically increasing.
Usage
check_time_monotonic(time_vector, time_col_name = "time_secs")
Arguments
time_vector |
The time vector to check |
time_col_name |
The name of the time column for error messages |
Value
No return value; throws error if time series is not monotonic
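A minimal sketch of the check being described (assumed logic, not the eyeris internals):
is_monotonic <- function(time_vector) all(diff(time_vector) > 0)
is_monotonic(c(0, 0.002, 0.004, 0.006)) # TRUE
is_monotonic(c(0, 0.004, 0.002))        # FALSE -> would trigger the error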
Clean string by removing non-alphanumeric characters
Description
Removes all non-alphanumeric and non-whitespace characters from a string.
Usage
clean_string(str)
Arguments
str |
The string to clean |
Value
A cleaned string with only alphanumeric characters and spaces
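For example, the described behavior can be approximated with a single regular expression (an assumed sketch, not necessarily the exact pattern used internally):
gsub("[^[:alnum:][:space:]]", "", "PROBE START_22!")
# "PROBE START22"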
Clean up individual image files in run directories after HTML generation
Description
Removes PNG and JPG files from the root of source/figures/run-xx directories after all HTML reports have been generated. This cleans up loose image files that may have been created during report generation.
Usage
cleanup_run_dir_images(report_path, eye_suffix = NULL, verbose = FALSE)
Arguments
report_path |
Path to the report directory containing source/figures/ |
eye_suffix |
Optional eye suffix for binocular data |
verbose |
Whether to print verbose output |
Value
Invisibly returns TRUE if cleanup was successful, FALSE otherwise
Clean up source figures after report generation
Description
Removes the entire source/figures directory after the main HTML report has been generated since all images are now embedded in the HTML as data URLs.
Usage
cleanup_source_figures_post_render(
report_path,
eye_suffix = NULL,
verbose = FALSE
)
Arguments
report_path |
Path to the report directory |
eye_suffix |
Optional eye suffix for binocular data (unused but kept for compatibility) |
verbose |
Whether to print verbose output |
Value
Invisibly returns TRUE if cleanup was successful, FALSE otherwise
Cleanup temporary database
Description
Safely disconnects and removes temporary database files after successful merge.
Usage
cleanup_temp_database(temp_db_info, verbose = FALSE)
Arguments
temp_db_info |
List containing temp database connection and paths |
verbose |
Whether to print verbose output |
Value
Logical indicating success
Compute baseline correction for epoch data
Description
Applies baseline correction to epoch data using either subtractive or divisive methods.
Usage
compute_baseline(
x,
epochs,
baseline_epochs,
mode,
epoch_events = NULL,
baseline_events = NULL
)
Arguments
x |
An eyeris object |
epochs |
A list of epoch data frames |
baseline_epochs |
A list of baseline epoch data frames |
mode |
The baseline correction mode ("sub" for subtractive, "div" for divisive) |
epoch_events |
Event messages for epochs (optional) |
baseline_events |
Event messages for baselines (optional) |
Value
A list containing baseline correction results and metadata
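A minimal sketch of the two correction modes (assumed arithmetic for illustration; see epoch() for the user-facing interface):
baseline_mean <- mean(c(4012, 4020, 4008)) # mean pupil size in the baseline window
epoch_pupil <- c(4100, 4150, 4090)         # pupil samples within an epoch
epoch_pupil - baseline_mean                # mode = "sub" (subtractive)
epoch_pupil / baseline_mean                # mode = "div" (divisive; hence the
                                           # zero-mean check in check_baseline_mean())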
Create or connect to eyeris project database
Description
Creates a new DuckDB
database for the eyeris
project or connects to an existing one.
The database will be created in the BIDS derivatives directory. When parallel processing
is enabled, creates temporary databases to avoid concurrency issues.
Usage
connect_eyeris_database(
bids_dir,
db_path = "my-project",
verbose = FALSE,
parallel = FALSE
)
Arguments
bids_dir |
Path to the BIDS directory containing derivatives |
db_path |
Database name (defaults to "my-project", becomes "my-project.eyerisdb") |
verbose |
Whether to print verbose output |
parallel |
Whether to enable parallel processing with temporary databases |
Value
DBI database connection object or temp database info list (when parallel=TRUE)
Convert nested data.table objects to tibbles
Description
Recursively converts data.table objects within nested lists to tibbles.
Usage
convert_nested_dt(nested_dt)
Arguments
nested_dt |
A nested list containing data.table objects |
Value
A nested list with data.table objects converted to tibbles
Count epochs and validate data is epoched
Description
Counts the number of epochs and validates that data has been epoched.
Usage
count_epochs(epochs)
Arguments
epochs |
A list of epoch data |
Value
No return value; throws error if no epochs found
Create a counter progress bar
Description
Creates a simple counter progress bar that shows current/total progress.
Usage
counter_bar(total, msg = "Progress", width = 80)
Arguments
total |
The total number of items to process |
msg |
The message to display before the counter |
width |
The width of the progress bar in characters |
Value
A progress bar object from the progress package
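For reference, a comparable counter-style bar can be built directly with the progress package (the exact format string used by eyeris may differ):
pb <- progress::progress_bar$new(
  format = "Progress [:bar] :current/:total",
  total = 10,
  width = 80
)
for (i in 1:10) pb$tick()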
Create zip file from epoch images
Description
Creates a zip file containing epoch images instead of saving individual files. This function collects all the image data in memory and then creates a single zip file, which can be more efficient for the HTML gallery display.
Usage
create_epoch_images_zip(
epochs_to_save,
epoch_index,
block_name,
run_dir_num,
epochs_out,
pupil_steps,
eyeris_object,
eye_suffix = NULL,
report_epoch_grouping_var_col = "matched_event",
verbose = FALSE
)
Arguments
epochs_to_save |
List of epoch data to save |
epoch_index |
Index of the current epoch being processed |
block_name |
Name of the current block being processed |
run_dir_num |
Run directory number |
epochs_out |
Output directory for the epoch files |
pupil_steps |
Vector of pupil processing steps |
eyeris_object |
The full eyeris object |
eye_suffix |
Optional eye suffix for binocular data |
report_epoch_grouping_var_col |
Column name for grouping epochs |
verbose |
Whether to print verbose output |
Value
Path to the created zip file
Create table name for eyeris data
Description
Generates a standardized table name for eyeris
data based on the data type
and subject information.
Usage
create_table_name(
data_type,
sub,
ses,
task,
run = NULL,
eye_suffix = NULL,
epoch_label = NULL
)
Arguments
data_type |
Type of data ("timeseries", "epochs", "epoch_timeseries", "epoch_summary", "events", "blinks") |
sub |
Subject ID |
ses |
Session ID |
task |
Task name |
run |
Run number |
eye_suffix |
Optional eye suffix for binocular data |
Value
Character string with table name
Create temporary database for parallel processing
Description
Creates a unique temporary database for use in parallel jobs to avoid concurrency issues. The temporary database is named using process ID and timestamp to ensure uniqueness.
Usage
create_temp_eyeris_database(
bids_dir,
base_db_path = "my-project",
verbose = FALSE
)
Arguments
bids_dir |
Path to the BIDS directory containing derivatives |
base_db_path |
Base database name (e.g., "my-project") |
verbose |
Whether to print verbose output |
Value
List containing database connection and temp database path
NA-pad blink events / missing data
Description
Deblinking (a.k.a. NA-padding) of time series data. The intended use of this method is to remove blink-related artifacts surrounding periods of missing data. For instance, when an individual blinks, there are usually rapid decreases followed by increases in pupil size, with a chunk of data missing in-between these 'spike'-looking events. The deblinking procedure here will NA-pad each missing data point by your specified number of ms.
Usage
deblink(eyeris, extend = 50, call_info = NULL)
Arguments
eyeris |
An object of class eyeris |
extend |
Either a single number indicating the number of milliseconds to pad forward/backward around each missing sample, or a vector of length two indicating different numbers of milliseconds to pad forward/backward around each missing sample, in the format c(backward, forward). Defaults to 50 |
call_info |
A list of call information and parameters. If not provided, it will be generated from the function call |
Details
This function is automatically called by glassbox()
by default. If needed,
customize the parameters for deblink
by providing a parameter list. Use
glassbox(deblink = FALSE)
to disable this step as needed.
Users should prefer using glassbox()
rather than invoking this function
directly unless they have a specific reason to customize the pipeline
manually.
Value
An eyeris
object with a new column: pupil_raw_{...}_deblink
Note
This function is part of the glassbox()
preprocessing pipeline and is not
intended for direct use in most cases. Provide parameters via
deblink = list(...)
.
Advanced users may call it directly if needed.
See Also
glassbox()
for the recommended way to run this step as
part of the full eyeris glassbox preprocessing pipeline
Examples
demo_data <- eyelink_asc_demo_dataset()
# 50 ms in both directions (the default)
demo_data |>
eyeris::glassbox(deblink = list(extend = 50)) |>
plot(seed = 0)
# 40 ms backward, 50 ms forward
demo_data |>
# set deblink to FALSE (instead of a list of params)
# to skip step (not recommended)
eyeris::glassbox(deblink = list(extend = c(40, 50))) |>
plot(seed = 0)
Internal function to remove blink artifacts from pupil data
Description
This function implements blink artifact removal by extending the duration of detected blinks (missing samples) by a specified number of milliseconds both forward and backward in time. This helps to remove deflections in pupil size that occur due to eyelid movements during and around actual blink periods.
This function is called by the exposed wrapper deblink()
.
Usage
deblink_pupil(x, prev_op, extend)
Arguments
x |
A data frame containing the pupil time series data |
prev_op |
The name of the previous operation's pupil column |
extend |
Either a single number indicating symmetric padding in both directions, or a vector of length 2 indicating asymmetric padding in the format c(backward, forward) |
Details
The function works by:
- Identifying missing samples as blink periods
- Extending these periods by the specified number of milliseconds
- Setting all samples within the extended blink periods to NA
- Preserving all other samples unchanged
This implementation is based on the approach described in the pupillometry package by dr-JT (https://github.com/dr-JT/pupillometry/blob/main/R/pupil_deblink.R).
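A simplified sketch of the padding logic described above (an assumed re-implementation for illustration, not the exact eyeris or dr-JT code):
na_pad <- function(pupil, extend_ms = 50, hz = 1000) {
  pad <- round(extend_ms / 1000 * hz) # convert milliseconds to samples
  na_idx <- which(is.na(pupil))
  if (length(na_idx) == 0 || pad == 0) return(pupil)
  # every index within `pad` samples of a missing sample also becomes NA
  drop_idx <- unique(unlist(lapply(na_idx, function(i) {
    seq(max(1, i - pad), min(length(pupil), i + pad))
  })))
  pupil[drop_idx] <- NA
  pupil
}
na_pad(c(5000, 5001, NA, 5003, 5004, 5005), extend_ms = 2, hz = 1000)
# NA NA NA NA NA 5005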
Value
A numeric vector of the same length as the input data with blink artifacts removed (set to NA)
Remove pupil samples that are physiologically unlikely
Description
The intended use of this method is for removing pupil samples that emerge
more quickly than would be physiologically expected. This is accomplished by
rejecting samples that exceed a "speed"-based threshold (i.e., median
absolute deviation from sample-to-sample). This threshold is computed based
on the constant n
, which defaults to the value 16
.
Usage
detransient(eyeris, n = 16, mad_thresh = NULL, call_info = NULL)
Arguments
eyeris |
An object of class eyeris |
n |
A constant used to compute the median absolute deviation (MAD) threshold. Defaults to 16 |
mad_thresh |
Default NULL, in which case the threshold is computed from the data as median_speed + (n * mad_val) (see Details). With the default multiplier n = 16, any speed exceeding this threshold is flagged as a transient artifact and removed |
call_info |
A list of call information and parameters. If not provided, it will be generated from the function call. Defaults to NULL |
Details
This function is automatically called by glassbox()
by default. If needed,
customize the parameters for detransient
by providing a parameter list. Use
glassbox(detransient = FALSE)
to disable this step as needed.
Users should prefer using glassbox()
rather than invoking this function
directly unless they have a specific reason to customize the pipeline
manually.
Computed properties:
- pupil_speed: The speed of the pupil, computed by approximating the derivative of x (pupil) with respect to y (time) using finite differences. Let x = (x_1, x_2, \dots, x_n) and y = (y_1, y_2, \dots, y_n) be two numeric vectors with n \ge 2; then, the finite differences are computed as
  \delta_i = \frac{x_{i+1} - x_i}{y_{i+1} - y_i}, \quad i = 1, 2, \dots, n-1.
  This produces an output vector p = (p_1, p_2, \dots, p_n) defined by:
  p_1 = |\delta_1| for the first element,
  p_n = |\delta_{n-1}| for the last element, and
  p_i = \max\{|\delta_{i-1}|,\,|\delta_i|\} for the intermediate elements (i = 2, 3, \dots, n-1).
- median_speed: The median of the computed pupil_speed: median\_speed = median(p)
- mad_val: The median absolute deviation (MAD) of pupil_speed from the median: mad\_val = median(|p - median\_speed|)
- mad_thresh: A threshold computed from the median speed and the MAD, using a constant multiplier n (default value: 16): mad\_thresh = median\_speed + (n \times mad\_val)
Value
An eyeris
object with a new column in time series
:
pupil_raw_{...}_detransient
Note
This function is part of the glassbox()
preprocessing pipeline and is not
intended for direct use in most cases. Provide parameters via
detransient = list(...)
.
Advanced users may call it directly if needed.
See Also
glassbox()
for the recommended way to run this step as
part of the full eyeris
glassbox preprocessing pipeline.
Examples
demo_data <- eyelink_asc_demo_dataset()
demo_data |>
eyeris::glassbox(
detransient = list(n = 16) # set to FALSE to skip step (not recommended)
) |>
plot(seed = 0)
Internal function to remove transient artifacts from pupil data
Description
This function implements transient artifact removal by
identifying and removing samples that exceed a speed-based threshold.
The threshold is computed based on the constant n
, which defaults to
the value 16
.
This function is called by the exposed wrapper detransient()
.
Usage
detransient_pupil(x, prev_op, n, mad_thresh)
Arguments
x |
A data frame containing the pupil time series data |
prev_op |
The name of the previous operation's pupil column |
n |
The constant used to compute the median absolute deviation (MAD) threshold. Defaults to 16 |
mad_thresh |
The threshold used to identify transient artifacts. Defaults to NULL, in which case it is computed from the data |
Details
The function works by:
- Calculating the speed of pupil changes using finite differences
- Identifying samples that exceed a speed-based threshold
- Removing these samples from the pupil data
Value
A numeric vector of the same length as the input data with transient artifacts removed (set to NA)
Detrend the pupil time series
Description
Linearly detrend_pupil data by fitting a linear model of pupil_data ~ time
,
and return the fitted betas and the residuals (pupil_data - fitted_values
).
Usage
detrend(eyeris, call_info = NULL)
Arguments
eyeris |
An object of class eyeris |
call_info |
A list of call information and parameters. If not provided, it will be generated from the function call. Defaults to NULL |
Details
This function is automatically called by glassbox()
if detrend = TRUE
.
Users should prefer using glassbox()
rather than invoking this function
directly unless they have a specific reason to customize the pipeline
manually.
Value
An eyeris
object with two new columns in time series
:
detrend_fitted_betas
, and pupil_raw_{...}_detrend
Note
This function is part of the glassbox()
preprocessing pipeline and is not
intended for direct use in most cases. Use glassbox(detrend = TRUE)
.
Advanced users may call it directly if needed.
See Also
glassbox()
for the recommended way to run this step as
part of the full eyeris
glassbox preprocessing pipeline
Examples
demo_data <- eyelink_asc_demo_dataset()
demo_data |>
eyeris::glassbox(detrend = TRUE) |> # set to FALSE to skip step (default)
plot(seed = 0)
Internal function to detrend pupil data
Description
This function detrends pupil data by fitting a linear model
of pupil_data ~ time
, and returning the fitted betas and the residuals
(pupil_data - fitted_values
).
This function is called by the exposed wrapper detrend()
.
Usage
detrend_pupil(x, prev_op)
Arguments
x |
A data frame containing the pupil time series data |
prev_op |
The name of the previous operation's pupil column |
Value
A list containing the fitted values, coefficients, and residuals
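A compact sketch of the model being fit (assumed form, mirroring the pupil_data ~ time description above):
time <- seq(0, 10, by = 0.01)
pupil <- 4000 + 5 * time + rnorm(length(time), sd = 2) # signal with a linear drift
fit <- lm(pupil ~ time)
detrended <- residuals(fit) # pupil_data - fitted_values
coef(fit)                   # the fitted betas returned alongside the residuals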
Disconnect from eyeris database
Description
Safely disconnects from the eyeris
project database.
Usage
disconnect_eyeris_database(con, verbose = FALSE)
Arguments
con |
Database connection object |
verbose |
Whether to print verbose output |
Value
Logical indicating success
Downsample pupil time series with anti-aliasing filtering
Description
This function downsamples pupillometry data by applying an anti-aliasing filter before decimation. Unlike binning, downsampling preserves the original temporal dynamics without averaging within bins.
Usage
downsample(
eyeris,
target_fs,
plot_freqz = FALSE,
rp = 1,
rs = 35,
call_info = NULL
)
Arguments
eyeris |
An object of class eyeris |
target_fs |
The target sampling frequency in Hz after downsampling. |
plot_freqz |
Boolean flag for displaying filter frequency response (default FALSE). |
rp |
Passband ripple in dB (default 1). |
rs |
Stopband attenuation in dB (default 35). |
call_info |
A list of call information and parameters. If not provided, it will be generated from the function call. |
Details
Downsampling reduces the sampling frequency by decimating data points.
The function automatically designs an anti-aliasing filter using the
lpfilt()
function with carefully chosen parameters:
- ws (stopband frequency) = Fs_new / 2 (Nyquist frequency of the new sampling rate)
- wp (passband frequency) = ws - max(5, Fs_nq * 0.2)
An error is raised if wp < 4 to prevent loss of pupillary responses.
The resulting time points will be: 0, 1/X, 2/X, 3/X, ..., etc. where X is the new sampling frequency.
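As a worked example of the filter-design rule above, assuming data recorded at 1000 Hz downsampled to 100 Hz (and interpreting Fs_nq as the Nyquist frequency of the new sampling rate):
target_fs <- 100
fs_nq <- target_fs / 2         # Nyquist frequency of the new rate (50 Hz)
ws <- fs_nq                    # stopband edge
wp <- ws - max(5, fs_nq * 0.2) # passband edge: 50 - 10 = 40 Hz
wp >= 4                        # TRUE, so no error is raised for this combination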
Value
An eyeris
object with downsampled data and updated sampling rate.
Note
This function is part of the glassbox()
preprocessing pipeline and is not
intended for direct use in most cases. Provide parameters via
downsample = list(...)
.
Advanced users may call it directly if needed.
See Also
glassbox()
for the recommended way to run this step as
part of the full eyeris
glassbox preprocessing pipeline.
bin()
for binning functionality.
Examples
demo_data <- eyelink_asc_demo_dataset()
# downsample pupil data recorded at 1000 Hz to 100 Hz with the default params
demo_data |>
eyeris::glassbox(downsample = list(target_fs = 100)) |>
plot(seed = 0)
Internal function to downsample pupil data
Description
This function downsamples pupil data by applying an anti-aliasing filter before decimation. Unlike binning, downsampling preserves the original temporal dynamics without averaging within bins.
This function is called by the exposed wrapper downsample()
.
Usage
downsample_pupil(x, prev_op, target_fs, plot_freqz, current_fs, rp, rs)
Arguments
x |
A data frame containing the pupil time series data |
prev_op |
The name of the previous operation's pupil column |
target_fs |
The target sampling frequency in Hz after downsampling |
plot_freqz |
A flag to indicate whether to display the filter frequency response. Defaults to FALSE |
current_fs |
The current sampling frequency in Hz |
rp |
Passband ripple in dB. Defaults to 1 |
rs |
Stopband attenuation in dB. Defaults to 35 |
Value
A list containing the downsampled data and the decimated sample rate
Draw vertical lines at NA positions
Description
Adds vertical dashed lines at positions where y values are NA.
Usage
draw_na_lines(x, y, ...)
Arguments
x |
The x-axis values |
y |
The y-axis values |
... |
Additional arguments passed to abline() |
Value
No return value; adds lines to the current plot
Draw random epochs for plotting
Description
Generates random time segments from the time series data for preview plotting.
Usage
draw_random_epochs(x, n, d, hz)
Arguments
x |
A data frame containing time series data |
n |
Number of random epochs to draw |
d |
Duration of each epoch in seconds |
hz |
Sampling rate in Hz |
Value
A list of data frames, each containing a random epoch segment
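A hypothetical sketch of this kind of random-segment draw (assumed logic, not the eyeris internals):
draw_segments <- function(x, n, d, hz) {
  len <- d * hz                                   # samples per segment
  starts <- sample(seq_len(nrow(x) - len + 1), n) # random start indices
  lapply(starts, function(s) x[s:(s + len - 1), , drop = FALSE])
}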
Epoch (and baseline) pupil data based on custom event message structure
Description
Intended to be used as the final preprocessing step. This function creates
data epochs of either fixed or dynamic durations with respect to provided
events
and time limits
, and also includes an intuitive metadata parsing
feature where additional trial data embedded within event messages can easily
be identified and joined into the resulting epoched data frames.
Usage
epoch(
eyeris,
events,
limits = NULL,
label = NULL,
baseline = FALSE,
baseline_type = c("sub", "div"),
baseline_events = NULL,
baseline_period = NULL,
hz = NULL,
verbose = TRUE,
call_info = NULL,
calc_baseline = deprecated(),
apply_baseline = deprecated()
)
Arguments
eyeris |
An object of class eyeris |
events |
Either (1) a single string representing the event message to perform trial extraction around, using the specified limits to center each epoch (with optional {curly brace} wildcards for parsing trial metadata embedded in the message, as shown in the Examples); (2) a vector of two strings specifying paired start and end event messages; or (3) a list manually specifying start/end event timestamp-message pairs plus the block number, in the format: list( data.frame(time = c(...), msg = c(...)), # start events data.frame(time = c(...), msg = c(...)), # end events 1 # block number ), where the first data.frame indicates the start events, the second data.frame indicates the end events, and the final element indicates the block number. For event-modes (1) and (2), any metadata parsed from the event messages is joined into the resulting epoched data frames |
limits |
A vector of 2 values (start, end) in seconds, indicating where trial extraction should occur centered around any given event message. Defaults to NULL |
label |
An (optional) string you can provide to customize the name of the resulting epoch_* output. Defaults to NULL |
baseline |
(New) A single parameter that controls baseline correction. Set to TRUE to calculate and apply baseline correction to the extracted epochs. Defaults to FALSE |
baseline_type |
Whether to perform subtractive ("sub") or divisive ("div") baseline correction. Defaults to "sub" |
baseline_events |
Similar to events: either a single "start" event message string (used together with baseline_period), or a vector of two strings specifying the start/end event messages that bound the baseline window. Defaults to NULL |
baseline_period |
A vector of 2 values (start, end) in seconds, indicating the window of data that will be used to perform the baseline correction, which will be centered around the single "start" message string provided in baseline_events. Defaults to NULL |
hz |
Data sampling rate. If not specified, will use the value contained within the tracker's metadata |
verbose |
A flag to indicate whether to print detailed logging messages
Defaults to |
call_info |
A list of call information and parameters. If not provided, it will be generated from the function call |
calc_baseline |
(Deprecated) Use baseline instead |
apply_baseline |
(Deprecated) Use baseline instead |
Value
An eyeris
object with a new nested list of data frames: $epoch_*
.
The epochs are organized hierarchically by block and preprocessing step.
Each epoch contains the pupil time series data for the specified time window
around each event message, along with metadata about the event.
When using bidsify()
to export the data, filenames will include both
epoch and baseline event information for clarity.
See Also
Examples
demo_data <- eyelink_asc_demo_dataset()
eye_preproc <- eyeris::glassbox(demo_data)
# example 1: select 1 second before/after matched event message "PROBE*"
eye_preproc |>
eyeris::epoch(events = "PROBE*", limits = c(-1, 1))
# example 2: select all samples between each trial
eye_preproc |>
eyeris::epoch(events = "TRIALID {trial}")
# example 3: grab the 1 second following probe onset
eye_preproc |>
eyeris::epoch(
events = "PROBE_START_{trial}",
limits = c(0, 1)
)
# example 4: 1 second prior to and 1 second after probe onset
eye_preproc |>
eyeris::epoch(
events = "PROBE_START_{trial}",
limits = c(-1, 1),
label = "prePostProbe" # custom epoch label name
)
# example 5: manual start/end event pairs
# note: here, the `msg` column of each data frame is optional
eye_preproc |>
eyeris::epoch(
events = list(
data.frame(time = c(11334491), msg = c("TRIALID 22")), # start events
data.frame(time = c(11337158), msg = c("RESPONSE_22")), # end events
1 # block number
),
label = "example5"
)
# example 6: manual start/end event pairs
# note: set `msg` to NA if you only want to pass in start/end timestamps
eye_preproc |>
eyeris::epoch(
events = list(
data.frame(time = c(11334491), msg = NA), # start events
data.frame(time = c(11337158), msg = NA), # end events
1 # block number
),
label = "example6"
)
## examples with baseline arguments enabled
# example 7: use mean of 1-s preceding "PROBE_START" (i.e. "DELAY_STOP")
# to perform subtractive baselining of the 1-s PROBE epochs.
eye_preproc |>
eyeris::epoch(
events = "PROBE_START_{trial}",
limits = c(0, 1), # grab 0 seconds prior to and 1 second post PROBE event
label = "prePostProbe", # custom epoch label name
baseline = TRUE, # calculate and apply baseline correction
baseline_type = "sub", # "sub"tractive baseline calculation is default
baseline_events = "DELAY_STOP_*",
baseline_period = c(-1, 0)
)
# example 8: use mean of time period between set start/end event messages
# (i.e. between "DELAY_START" and "DELAY_STOP"). In this case, the
# `baseline_period` argument will be ignored since both a "start" and "end"
# message string are provided to the `baseline_events` argument.
eye_preproc |>
eyeris::epoch(
events = "PROBE_START_{trial}",
limits = c(0, 1), # grab 0 seconds prior to and 1 second post PROBE event
label = "prePostProbe", # custom epoch label name
baseline = TRUE, # calculate and apply baseline correction
baseline_type = "sub", # "sub"tractive baseline calculation is default
baseline_events = c(
"DELAY_START_*",
"DELAY_STOP_*"
)
)
# example 9: additional (potentially helpful) example
start_events <- data.frame(
time = c(11334491, 11338691),
msg = c("TRIALID 22", "TRIALID 23")
)
end_events <- data.frame(
time = c(11337158, 11341292),
msg = c("RESPONSE_22", "RESPONSE_23")
)
block_number <- 1
eye_preproc |>
eyeris::epoch(
events = list(start_events, end_events, block_number),
label = "example9"
)
Block-by-block epoch and baseline handler
Description
This function processes a single block of pupil data to extract epochs and optionally compute and apply baseline corrections. It handles the core epoching and baselining logic for a single block of data.
Usage
epoch_and_baseline_block(
x,
blk,
lab,
evs,
lims,
msg_s,
msg_e,
c_bline,
a_bline,
bline_type,
bline_evs,
bline_per,
hz,
verbose
)
Arguments
x |
An object of class eyeris |
blk |
A list containing block metadata |
lab |
Label for the epoch output |
evs |
Events specification for epoching (character vector or list) |
lims |
Time limits for epochs (numeric vector) |
msg_s |
Start message string |
msg_e |
End message string |
c_bline |
Logical indicating whether to calculate baseline |
a_bline |
Logical indicating whether to apply baseline correction |
bline_type |
Type of baseline correction ("sub" or "div") |
bline_evs |
Events specification for baseline calculation |
bline_per |
Baseline period specification |
hz |
Sampling rate in Hz |
verbose |
A flag to indicate whether to print detailed logging messages |
Details
This function is called by the internal epoch_pupil()
function.
Value
A list containing epoch and baseline results
Manually epoch using provided start/end data frames of timestamps
Description
This function manually epochs data using provided start/end data frames of timestamps.
Usage
epoch_manually(eyeris, ts_list, hz, verbose)
Arguments
eyeris |
An object of class eyeris |
ts_list |
A list containing start/end data frames of timestamps |
hz |
Sampling rate in Hz |
verbose |
A flag to indicate whether to print detailed logging messages |
Details
This function is called by the internal process_epoch_and_baselines()
function.
Value
A list containing epoch results
Epoch based on a single event message (without explicit limits)
Description
This function epochs data based on a single event message (i.e., without explicit limits).
Usage
epoch_only_start_msg(eyeris, start, hz, verbose)
Arguments
eyeris |
An object of class eyeris |
start |
A data frame containing the start timestamps |
hz |
Sampling rate in Hz |
verbose |
A flag to indicate whether to print detailed logging messages |
Details
This function is called internally as part of the epoching pipeline; see epoch().
Value
A list containing epoch results
Main epoching and baselining logic
Description
This function handles the core epoching and baselining operations for pupil data. It processes time series data to extract epochs based on specified events and optionally computes and applies baseline corrections.
Usage
epoch_pupil(
x,
prev_op,
evs,
lims,
label,
c_bline,
a_bline,
bline_type = c("sub", "div"),
bline_evs,
bline_per,
hz,
verbose
)
Arguments
x |
An object of class eyeris |
prev_op |
The name of the previous operation's output column |
evs |
Events specification for epoching (character vector or list) |
lims |
Time limits for epochs (numeric vector) |
label |
Label for the epoch output |
c_bline |
Logical indicating whether to calculate baseline |
a_bline |
Logical indicating whether to apply baseline correction |
bline_type |
Type of baseline correction ("sub" or "div") |
bline_evs |
Events specification for baseline calculation |
bline_per |
Baseline period specification |
hz |
Sampling rate in Hz |
verbose |
A flag to indicate whether to print detailed logging messages |
Details
This function is called by the exposed wrapper epoch()
.
Value
A list containing epoch and baseline results
Epoch using a start and an end message (explicit timestamps)
Description
This function epochs data using a start and an end message (i.e., explicit timestamps).
Usage
epoch_start_end_msg(eyeris, start, end, hz, verbose)
Arguments
eyeris |
An object of class eyeris |
start |
A data frame containing the start timestamps |
end |
A data frame containing the end timestamps |
hz |
Sampling rate in Hz |
verbose |
A flag to indicate whether to print detailed logging messages |
Details
This function is called internally as part of the epoching pipeline; see epoch().
Value
A list containing epoch results
Epoch using a start message with fixed limits around it
Description
This function epochs data using a start message with fixed limits around it.
Usage
epoch_start_msg_and_limits(eyeris, start, lims, hz, verbose)
Arguments
eyeris |
An object of class eyeris |
start |
A data frame containing the start timestamps |
lims |
Time limits for epochs (numeric vector) |
hz |
Sampling rate in Hz |
verbose |
A flag to indicate whether to print detailed logging messages |
Details
This function is called internally as part of the epoching pipeline; see epoch().
Value
A list containing epoch results
Handle errors with custom error classes
Description
A utility function to handle errors with specific error classes and provide appropriate error messages using the cli package.
Usage
error_handler(e, e_class)
Arguments
e |
The error object to handle |
e_class |
The expected error class to check against |
Value
No return value; either displays an error message via cli or stops execution with the original error
Evaluate pipeline step parameters
Description
Converts pipeline step parameters to logical values for evaluation.
Usage
evaluate_pipeline_step_params(params)
Arguments
params |
A list of pipeline step parameters |
Value
A logical vector indicating which steps should be executed
Export confounds data to CSV files and/or database
Description
Exports each block's confounds data to a separate CSV file and/or database table. Each file will contain all pupil steps (e.g., pupil_raw, pupil_clean) as rows, with confound metrics as columns.
Usage
export_confounds_to_csv(
confounds_list,
output_dir,
filename_prefix,
verbose,
run_num = NULL,
csv_enabled = TRUE,
db_con = NULL,
sub = NULL,
ses = NULL,
task = NULL,
eye_suffix = NULL,
epoch_label = NULL
)
Arguments
confounds_list |
A nested list structure containing confounds data |
output_dir |
The directory where CSV files will be saved |
filename_prefix |
Either a string prefix for filenames or a function that takes a block name and returns a prefix |
verbose |
A flag to indicate whether to print progress messages |
run_num |
The run number (if NULL, will be extracted from block names) |
csv_enabled |
Whether to write CSV files (default TRUE) |
db_con |
Database connection object (NULL if database disabled) |
sub |
Subject ID for database metadata |
ses |
Session ID for database metadata |
task |
Task name for database metadata |
eye_suffix |
Eye suffix for binocular data (e.g., "eye-L", "eye-R") |
epoch_label |
Epoch label for epoched data (added as column) |
Value
Invisibly returns a vector of created file paths
Extract baseline epochs from time series data
Description
Extracts baseline periods from time series data based on event messages and time ranges or start/end messages.
Usage
extract_baseline_epochs(x, df, evs, time_range, matched_epochs, hz)
Arguments
x |
An eyeris object |
df |
The time series data frame |
evs |
Event messages for baseline extraction |
time_range |
Time range for baseline extraction |
matched_epochs |
Matched epoch start/end times |
hz |
Sampling rate in Hz |
Value
A list of baseline epoch data frames
Extract event identifiers from event messages
Description
Extracts identifiers (like image names or trial numbers) from event messages to enable matching between start and end events.
Usage
extract_event_ids(events)
Arguments
events |
Data frame containing event messages |
Value
A vector of extracted identifiers
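For example, with messages like those shown in the epoch() examples ("TRIALID 22", "RESPONSE_22"), trailing trial numbers could be pulled out as follows (a hypothetical pattern; the actual extraction rules may differ):
msgs <- c("TRIALID 22", "RESPONSE_22", "TRIALID 23", "RESPONSE_23")
regmatches(msgs, regexpr("[0-9]+$", msgs))
# "22" "22" "23" "23"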
Access example EyeLink .asc binocular mock dataset file provided by the eyeris package.
Description
Returns the file path to the demo binocular .asc
EyeLink pupil data file
included in the eyeris
package.
Usage
eyelink_asc_binocular_demo_dataset()
Details
This dataset is a mock dataset trimmed from a larger data file. The original data file was obtained from: https://github.com/scott-huberty/eyelinkio/blob/main/src/eyelinkio/tests/data/test_raw_binocular.edf
Value
A character string giving the full file path to the demo .asc
EyeLink pupil data file
Examples
path_to_binocular_demo_dataset <- eyelink_asc_binocular_demo_dataset()
print(path_to_binocular_demo_dataset)
Access example EyeLink .asc demo dataset file provided by the eyeris package.
Description
Returns the file path to the demo .asc
EyeLink pupil data file
included in the eyeris
package.
Usage
eyelink_asc_demo_dataset()
Value
A character string giving the full file path to the demo .asc
EyeLink pupil data file
Examples
path_to_demo_dataset <- eyelink_asc_demo_dataset()
print(path_to_demo_dataset)
Run eyeris
commands with automatic logging of R console's stdout and stderr
Description
This utility function evaluates eyeris
commands while automatically
capturing and recording both standard output (stdout
) and standard error
(stderr
) to timestamped log files in your desired log directory.
Usage
eyelogger(
eyeris_cmd,
log_dir = file.path(tempdir(), "eyeris_logs"),
timestamp_format = "%Y%m%d_%H%M%S"
)
Arguments
eyeris_cmd |
An eyeris command to be evaluated, wrapped in braces if it spans multiple expressions |
log_dir |
Character path to the desired log directory. Defaults to an "eyeris_logs" folder within the temporary directory given by tempdir() |
timestamp_format |
Format string used to generate the timestamp in the log filenames. Defaults to "%Y%m%d_%H%M%S" |
Details
Each run produces two log files:
- <timestamp>.out: records all console output
- <timestamp>.err: records all warnings and errors
Value
The result of the evaluated eyeris
command (invisibly)
Examples
eyelogger({
message("eyeris `glassbox()` completed successfully.")
warning("eyeris `glassbox()` completed with warnings.")
print("some eyeris-related information.")
})
eyelogger({
glassbox(eyelink_asc_demo_dataset(), interactive_preview = FALSE)
}, log_dir = file.path(tempdir(), "eyeris_logs"))
Default color palette for eyeris plotting functions
Description
A custom color palette designed for visualizing pupil data preprocessing steps. This palette is based on the RColorBrewer Set1 palette and provides distinct, visually appealing colors for different preprocessing stages.
Usage
eyeris_color_palette()
Details
The palette includes 7 colors optimized for:
- High contrast and visibility
- Colorblind-friendly design
- Consistent visual hierarchy across preprocessing steps
- Professional appearance in reports and publications
Colors are designed to work well with both light and dark backgrounds and maintain readability when overlaid in time series plots.
Value
A character vector of 7 hex color codes representing the default eyeris color palette
Examples
# get the default color palette
colors <- eyeris_color_palette()
print(colors)
# use in a plot
plot(1:7, 1:7, col = colors, pch = 19, cex = 3)
Extract and aggregate eyeris data across subjects from database
Description
A comprehensive wrapper function that simplifies extracting eyeris
data from
the database. Provides easy one-liner access to aggregate data across multiple
subjects for each data type, without requiring SQL knowledge.
Usage
eyeris_db_collect(
bids_dir,
db_path = "my-project",
subjects = NULL,
data_types = NULL,
sessions = NULL,
tasks = NULL,
epoch_labels = NULL,
eye_suffixes = NULL,
verbose = TRUE
)
Arguments
bids_dir |
Path to the BIDS directory containing the database |
db_path |
Database name (defaults to "my-project", becomes "my-project.eyerisdb") |
subjects |
Vector of subject IDs to include. If NULL (default), includes all subjects |
data_types |
Vector of data types to extract. If NULL (default), extracts all available types. Valid types: "blinks", "events", "timeseries", "epochs", "epoch_summary", "run_confounds", "confounds_events", "confounds_summary" |
sessions |
Vector of session IDs to include. If NULL (default), includes all sessions |
tasks |
Vector of task names to include. If NULL (default), includes all tasks |
epoch_labels |
Vector of epoch labels to include. If NULL (default), includes all epochs. Only applies to epoch-related data types |
eye_suffixes |
Vector of eye suffixes to include. If NULL (default), includes all eyes. Typically c("eye-L", "eye-R") for binocular data |
verbose |
Logical. Whether to print progress messages (default TRUE) |
Value
A named list of data frames, one per data type
Examples
demo_data <- eyelink_asc_demo_dataset()
demo_data |>
eyeris::glassbox() |>
eyeris::epoch(
events = "PROBE_{startstop}_{trial}",
limits = c(-1, 1),
label = "prePostProbe"
) |>
eyeris::bidsify(
bids_dir = tempdir(),
participant_id = "001",
session_num = "01",
task_name = "assocret",
run_num = "03", # override default run-01 (block_1) to use run-03 instead
db_enabled = TRUE # enable database storage
)
# extract all data for all subjects (returns list of data frames)
all_data <- eyeris_db_collect(tempdir())
# view available data types
names(all_data)
# access specific data type
blinks_data <- all_data$blinks
epochs_data <- all_data$epochs
# extract specific subjects and data types
subset_data <- eyeris_db_collect(
bids_dir = tempdir(),
subjects = c("001"),
data_types = c("blinks", "epochs", "timeseries")
)
# extract epoch data for specific epoch label
epoch_data <- eyeris_db_collect(
bids_dir = tempdir(),
data_types = "epochs",
epoch_labels = "prepostprobe"
)
Connect to eyeris project database (user-facing)
Description
User-friendly function to connect to an existing eyeris
project database.
This function provides easy access for users to query their eyeris
data.
Usage
eyeris_db_connect(bids_dir, db_path = "my-project")
Arguments
bids_dir |
Path to the BIDS directory containing the database |
db_path |
Database name (defaults to "my-project", becomes "my-project.eyerisdb"). If just a filename is given, the database will be looked for in the BIDS derivatives directory |
Value
Database connection object for use with other eyeris database functions
Examples
# step 1: create a database using bidsify with db_enabled = TRUE
# (This example assumes you have already run bidsify to create a database)
# temp dir for testing
temp_dir <- tempdir()
# step 2: connect to eyeris DB (will fail gracefully if no DB exists)
tryCatch({
con <- eyeris_db_connect(temp_dir)
tables <- eyeris_db_list_tables(con)
# read timeseries data for a specific subject
data <- eyeris_db_read(con, data_type = "timeseries", subject = "001")
# close connection when done
eyeris_db_disconnect(con)
}, error = function(e) {
message("No eyeris DB found - create one first with bidsify(db_enabled = TRUE)")
})
Disconnect from eyeris database (user-facing)
Description
User-friendly function to disconnect from the eyeris
project database.
Usage
eyeris_db_disconnect(con)
Arguments
con |
Database connection object |
Value
Logical indicating success
List available tables in eyeris database
Description
Lists all tables in the eyeris
project database with optional filtering.
Usage
eyeris_db_list_tables(con, data_type = NULL, subject = NULL)
Arguments
con |
Database connection |
data_type |
Optional filter by data type |
subject |
Optional filter by subject ID |
Value
Character vector of table names
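A brief sketch of typical usage, assuming a database was previously created with bidsify(db_enabled = TRUE) under tempdir() (as in the eyeris_db_connect() example below); the subject ID is a placeholder:
tryCatch({
  con <- eyeris_db_connect(tempdir())
  # list every table in the database
  eyeris_db_list_tables(con)
  # restrict the listing to epoch tables for a single subject
  eyeris_db_list_tables(con, data_type = "epochs", subject = "001")
  eyeris_db_disconnect(con)
}, error = function(e) {
  message("No eyeris DB found - create one first with bidsify(db_enabled = TRUE)")
})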
Read eyeris data from database
Description
Reads eyeris
data from the project database with a dplyr-style interface.
Usage
eyeris_db_read(
con,
data_type = NULL,
subject = NULL,
session = NULL,
task = NULL,
run = NULL,
eye_suffix = NULL,
epoch_label = NULL,
table_name = NULL
)
Arguments
con |
Database connection |
data_type |
Type of data to read ("timeseries", "epochs", "epoch_timeseries", "epoch_summary", "events", "blinks") |
subject |
Optional subject ID filter |
session |
Optional session ID filter |
task |
Optional task name filter |
run |
Optional run number filter |
eye_suffix |
Optional eye suffix filter |
epoch_label |
Optional epoch label filter (for epoched data) |
table_name |
Exact table name (overrides other parameters) |
Value
Data frame with requested data
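A brief sketch of typical usage, assuming an existing eyeris database under tempdir(); the subject and task values are placeholders:
tryCatch({
  con <- eyeris_db_connect(tempdir())
  # read epoch rows for one subject and task
  epochs <- eyeris_db_read(
    con,
    data_type = "epochs",
    subject = "001",
    task = "assocret"
  )
  # alternatively, address a single table directly by its exact name
  # (table_name overrides the other filter arguments)
  first_table <- eyeris_db_list_tables(con)[1]
  one_table <- eyeris_db_read(con, table_name = first_table)
  eyeris_db_disconnect(con)
}, error = function(e) {
  message("No eyeris DB found - create one first with bidsify(db_enabled = TRUE)")
})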
Reconstruct eyerisdb from chunked files
Description
Merges multiple chunked eyerisdb files back into a single database file.
Uses the reconstruction metadata file created by eyeris_db_split_for_sharing()
to ensure proper reconstruction.
Usage
eyeris_db_reconstruct_from_chunks(
chunked_dir,
output_path,
reconstruction_file = NULL,
verbose = TRUE
)
Arguments
chunked_dir |
Directory containing the chunked database files and reconstruction metadata |
output_path |
Full path for the reconstructed database (e.g., "/path/to/reconstructed.eyerisdb") |
reconstruction_file |
Path to the reconstruction metadata JSON file. If NULL (default), searches for "*_reconstruction_info.json" in chunked_dir |
verbose |
Whether to print progress messages (default: TRUE) |
Value
List containing information about the reconstruction process
Examples
## Not run:
# Reconstruct database from chunked files
reconstruction_info <- eyeris_db_reconstruct_from_chunks(
chunked_dir = "/path/to/chunked_db/project-name",
output_path = "/path/to/reconstructed-project.eyerisdb"
)
# Specify custom reconstruction file location
reconstruction_info <- eyeris_db_reconstruct_from_chunks(
chunked_dir = "/path/to/chunked_db/project-name",
output_path = "/path/to/reconstructed-project.eyerisdb",
reconstruction_file = "/path/to/custom_reconstruction_info.json"
)
## End(Not run)
Split eyerisdb for data sharing and distribution
Description
Creates multiple smaller eyerisdb files from a single large database for easier distribution via platforms with file size limits (GitHub, OSF, data repositories, etc.). Data can be chunked by data type, by number of chunks, or by maximum file size. Includes metadata to facilitate reconstruction of the original database.
Usage
eyeris_db_split_for_sharing(
bids_dir,
db_path = "my-project",
output_dir = NULL,
chunk_strategy = "by_data_type",
n_chunks = 4,
max_chunk_size_mb = 100,
data_types = NULL,
group_by_epoch_label = TRUE,
include_metadata = TRUE,
verbose = TRUE
)
Arguments
bids_dir |
Path to the BIDS directory containing the source database |
db_path |
Source database name (defaults to "my-project", becomes "my-project.eyerisdb") |
output_dir |
Directory to save chunked databases (defaults to bids_dir/derivatives/chunked_db) |
chunk_strategy |
Strategy for chunking: "by_data_type", "by_count", or "by_size" (default: "by_data_type") |
n_chunks |
Number of chunks to create when chunk_strategy = "by_count" (default: 4) |
max_chunk_size_mb |
Maximum size per chunk in MB when chunk_strategy = "by_size" (default: 100) |
data_types |
Vector of data types to include. If NULL (default), includes all available |
group_by_epoch_label |
If TRUE (default), processes epoch-related data types separately by epoch label |
include_metadata |
Whether to include eyeris metadata columns in chunked databases (default: TRUE) |
verbose |
Whether to print progress messages (default: TRUE) |
Value
List containing information about created chunked databases and reconstruction instructions
Examples
## Not run:
# These examples require an existing eyeris database
# Chunk by data type (each data type gets its own database file)
chunk_info <- eyeris_db_split_for_sharing(
bids_dir = "/path/to/bids",
db_path = "large-project",
chunk_strategy = "by_data_type"
)
# Chunk into 6 files by count
chunk_info <- eyeris_db_split_for_sharing(
bids_dir = "/path/to/bids",
db_path = "large-project",
chunk_strategy = "by_count",
n_chunks = 6
)
# Chunk by size (max 50MB per file)
chunk_info <- eyeris_db_split_for_sharing(
bids_dir = "/path/to/bids",
db_path = "large-project",
chunk_strategy = "by_size",
max_chunk_size_mb = 50
)
## End(Not run)
Get summary statistics for eyeris database
Description
Provides a quick overview of the contents of an eyeris
database,
including available subjects, sessions, tasks, and data types.
Usage
eyeris_db_summary(bids_dir, db_path = "my-project", verbose = TRUE)
Arguments
bids_dir |
Path to the BIDS directory containing the database |
db_path |
Database name (defaults to "my-project", becomes "my-project.eyerisdb") |
verbose |
Logical. Whether to print detailed output (default TRUE) |
Value
A named list containing summary information about the database contents
Examples
demo_data <- eyelink_asc_demo_dataset()
demo_data |>
eyeris::glassbox() |>
eyeris::epoch(
events = "PROBE_{startstop}_{trial}",
limits = c(-1, 1),
label = "prePostProbe"
) |>
eyeris::bidsify(
bids_dir = file.path(tempdir(), "my-cool-memory-project"),
participant_id = "001",
session_num = "01",
task_name = "assocret",
run_num = "03", # override default run-01 (block_1) to use run-03 instead
db_enabled = TRUE,
db_path = "my-cool-memory-study",
)
# get database summary
summary <- eyeris_db_summary(
file.path(
tempdir(),
"my-cool-memory-project"
),
db_path = "my-cool-memory-study"
)
# view available subjects
summary$subjects
# view available data types
summary$data_types
# view table counts
summary$table_counts
Export eyeris database to chunked files
Description
High-level wrapper function to export large eyeris databases to chunked CSV or Parquet files by data type. Uses chunked processing to handle very large datasets without memory issues.
Usage
eyeris_db_to_chunked_files(
bids_dir,
db_path = "my-project",
output_dir = NULL,
chunk_size = 1e+06,
file_format = "csv",
data_types = NULL,
subjects = NULL,
max_file_size_mb = 50,
group_by_epoch_label = TRUE,
verbose = TRUE
)
Arguments
bids_dir |
Path to the BIDS directory containing the database |
db_path |
Database name (defaults to "my-project", becomes "my-project.eyerisdb") |
output_dir |
Directory to save output files (defaults to bids_dir/derivatives/eyerisdb_export) |
chunk_size |
Number of rows to process per chunk (default: 1000000) |
file_format |
Output format: "csv" or "parquet" (default: "csv") |
data_types |
Vector of data types to export. If NULL (default), exports all available |
subjects |
Vector of subject IDs to include. If NULL (default), includes all subjects |
max_file_size_mb |
Maximum file size in MB per output file (default: 50). When exceeded, automatically creates numbered files (e.g., data_01-of-03.csv, data_02-of-03.csv) |
group_by_epoch_label |
If TRUE (default), processes epoch-related data types separately by epoch label to reduce memory footprint and produce label-specific files. When FALSE, epochs with different labels are merged into single large files (not recommended). |
verbose |
Whether to print progress messages (default: TRUE) |
Value
List containing information about exported files
Examples
## Not run:
# These examples require an existing eyeris database
# Export entire database to CSV files
if (file.exists(file.path(tempdir(), "derivatives", "large-project.eyerisdb"))) {
export_info <- eyeris_db_to_chunked_files(
bids_dir = tempdir(),
db_path = "large-project",
chunk_size = 50000,
file_format = "csv"
)
}
# Export specific data types to Parquet
if (file.exists(file.path(tempdir(), "derivatives", "large-project.eyerisdb"))) {
export_info <- eyeris_db_to_chunked_files(
bids_dir = tempdir(),
db_path = "large-project",
data_types = c("timeseries", "events"),
file_format = "parquet",
chunk_size = 75000
)
}
## End(Not run)
Split eyeris database into N parquet files by data type
Description
Utility function that takes an eyerisdb DuckDB database and splits it into N reasonably sized parquet files for easy management with GitHub, downloading, and distribution. Data is first grouped by table type (timeseries, epochs, events, etc.) since each has different columnar structures, then each group is split into the specified number of files. Files are organized in folders matching the database name for easy identification.
Usage
eyeris_db_to_parquet(
bids_dir,
db_path = "my-project",
n_files_per_type = 1,
output_dir = NULL,
max_file_size = 512,
data_types = NULL,
verbose = TRUE,
include_metadata = TRUE,
epoch_labels = NULL,
group_by_epoch_label = TRUE
)
Arguments
bids_dir |
Path to the BIDS directory containing the database |
db_path |
Database name (defaults to "my-project", becomes "my-project.eyerisdb") |
n_files_per_type |
Number of parquet files to create per data type (default: 1) |
output_dir |
Directory to save parquet files (defaults to bids_dir/derivatives/parquet) |
max_file_size |
Maximum file size in MB per parquet file (default: 512) Used as a constraint when n_files_per_type would create files larger than this |
data_types |
Vector of data types to include. If NULL (default), includes all available. Valid types: "timeseries", "epochs", "epoch_summary", "events", "blinks", "confounds_*" |
verbose |
Whether to print progress messages (default: TRUE) |
include_metadata |
Whether to include eyeris metadata columns in output (default: TRUE) |
epoch_labels |
Optional character vector of epoch labels to include (e.g., "prepostprobe"). Only applies to epoch-related data types. If NULL, includes all labels. |
group_by_epoch_label |
If TRUE, processes epoch-related data types separately by epoch label to reduce memory footprint and produce label-specific parquet files (default: TRUE). |
Value
List containing information about created parquet files
Database Safety
This function creates temporary tables during parquet export when the arrow package is not available. All temporary tables are automatically cleaned up, but if the process crashes, leftover tables may remain. The function checks for and warns about existing temporary tables before starting.
Examples
# create demo database
demo_data <- eyelink_asc_demo_dataset()
demo_data |>
eyeris::glassbox() |>
eyeris::epoch(
events = "PROBE_{startstop}_{trial}",
limits = c(-1, 1),
label = "prePostProbe"
) |>
eyeris::bidsify(
bids_dir = tempdir(),
participant_id = "001",
session_num = "01",
task_name = "memory",
db_enabled = TRUE,
db_path = "memory-task"
)
# split into 3 parquet files per data type - creates memory-task/ folder
split_info <- eyeris_db_to_parquet(
bids_dir = tempdir(),
db_path = "memory-task",
n_files_per_type = 3
)
# split with size constraint and specific data types using the same database
split_info <- eyeris_db_to_parquet(
bids_dir = tempdir(),
db_path = "memory-task",
n_files_per_type = 5,
max_file_size = 50, # max 50MB per file
data_types = c("timeseries", "epochs", "events")
)
Filter epoch names from eyeris object
Description
Extracts names of epoch-related elements from an eyeris
object.
Usage
filter_epochs(eyeris, epochs)
Arguments
eyeris |
An |
epochs |
A vector of epoch names to filter |
Value
A character vector of epoch names that start with "epoch_"
Find baseline structure name for a given epoch
Description
Helper function to find the correct baseline structure name that matches
the complex baseline naming scheme used by eyeris
.
Usage
find_baseline_structure(eyeris, epoch_label, verbose = TRUE)
Arguments
eyeris |
An object of class |
epoch_label |
The epoch label (without "epoch_" prefix) |
verbose |
Logical. Whether to print detailed output (default TRUE) |
Value
The baseline structure name or NULL
if not found
Format call stack information for display
Description
Converts call stack information into a formatted data frame for display.
Usage
format_call_stack(callstack)
Arguments
callstack |
A list of call stack information |
Value
A data frame with formatted call stack information
Extract block numbers from eyeris object or character vector
Description
Extracts numeric block numbers from block names or an eyeris
object.
Usage
get_block_numbers(x)
Arguments
x |
Either a character vector of block names or an |
Value
A numeric vector of block numbers, defaults to 1 if no blocks found
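A minimal sketch; get_block_numbers() may be internal, so the ::: accessor below is an assumption:
# for a character vector of block names, numeric block numbers are extracted
eyeris:::get_block_numbers(c("block_1", "block_2", "block_3"))
# an eyeris object can be passed instead; 1 is returned if no blocks are found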
Calculate confounds for a single pupil data step
Description
Computes various metrics from pupil data, including:
- Blink detection
- Gaze on/off screen detection
- Gap analysis
- Gaze distance from screen center
- Gaze variance
- Blink rate
- Blink duration
- Blink time
Usage
get_confounds_for_step(pupil_df, pupil_vec, screen_width, screen_height, hz)
Arguments
pupil_df |
A data frame containing pupil data |
pupil_vec |
A vector of pupil data for the current step |
screen_width |
The screen width in pixels |
screen_height |
The screen height in pixels |
hz |
The sampling rate in Hz |
Value
A data frame containing confounds metrics for the current step
Get formatted timestamp for logging
Description
Get formatted timestamp for logging
Usage
get_log_timestamp()
Value
Character string with current timestamp
Obtain timestamps from events data
Description
Extracts start and end timestamps from events data based on message patterns.
Usage
get_timestamps(
evs,
timestamped_events,
msg_s,
msg_e,
limits,
baseline_mode = FALSE
)
Arguments
evs |
Event messages or list of events |
timestamped_events |
Events data frame with timestamps |
msg_s |
Start message pattern |
msg_e |
End message pattern |
limits |
Time limits for wildcard mode |
baseline_mode |
Whether in baseline calculation mode |
Value
A list containing start and end timestamps
The opinionated "glass box" eyeris
pipeline
Description
This glassbox
function (in contrast to a "black box" function, where you run it and get a
result but have little or no idea how you got from input to output) has a few
primary benefits over calling each exported function from eyeris
separately.
Usage
glassbox(
file,
interactive_preview = FALSE,
preview_n = 3,
preview_duration = 5,
preview_window = NULL,
verbose = TRUE,
...,
confirm = deprecated(),
num_previews = deprecated(),
detrend_data = deprecated(),
skip_detransient = deprecated()
)
Arguments
file |
An SR Research EyeLink |
interactive_preview |
A flag to indicate whether to run the |
preview_n |
Number of random example "epochs" to generate for previewing the effect of each preprocessing step on the pupil time series |
preview_duration |
Time in seconds of each randomly selected preview |
preview_window |
The start and stop raw timestamps used to subset the
preprocessed data from each step of the |
verbose |
A logical flag to indicate whether to print status messages to
the console. Defaults to |
... |
Additional arguments to override the default, prescribed settings |
confirm |
(Deprecated) Use |
num_previews |
(Deprecated) Use |
detrend_data |
(Deprecated) A flag to indicate whether to run the
|
skip_detransient |
(Deprecated) A flag to indicate whether to skip
the |
Details
First, this glassbox
function provides a highly opinionated prescription of
steps and starting parameters we believe any pupillometry researcher should
use as their defaults when preprocessing pupillometry data.
Second, and not mutually exclusive from the first point, using this function should ideally reduce the probability of accidental mishaps when "reimplementing" the steps from the preprocessing pipeline both within and across projects. We hope to streamline the process in such a way that you could collect a pupillometry dataset and within a few minutes assess the quality of those data while simultaneously running a full preprocessing pipeline in 1-ish line of code!
Third, glassbox
provides an "interactive" framework where you can evaluate
the consequences of the parameters within each step on your data in real
time, facilitating a fairly easy-to-use workflow for parameter optimization
on your particular dataset. This process essentially takes each of the
opinionated steps and provides a pre-/post-plot of the time series data for
each step so you can adjust parameters and re-run the pipeline until you are
satisfied with the choices of your parameters and their consequences on your
pupil time series data.
Value
Preprocessed pupil data contained within an object of class eyeris
See Also
Examples
demo_data <- eyelink_asc_demo_dataset()
# (1) examples using the default prescribed parameters and pipeline recipe
## (a) run an automated pipeline with no real-time inspection of parameters
output <- eyeris::glassbox(demo_data)
start_time <- min(output$timeseries$block_1$time_secs)
end_time <- max(output$timeseries$block_1$time_secs)
# by default, verbose = TRUE. To suppress messages, set verbose = FALSE.
plot(
output,
steps = c(1, 5),
preview_window = c(start_time, end_time),
seed = 0
)
## (b) run an interactive workflow (with confirmation prompts after each step)
output <- eyeris::glassbox(demo_data, interactive_preview = TRUE, seed = 0)
# (2) examples of overriding the default parameters
output <- eyeris::glassbox(
demo_data,
interactive_preview = FALSE, # TRUE to visualize each step in real-time
deblink = list(extend = 40),
lpfilt = list(plot_freqz = TRUE) # overrides verbose parameter
)
# to suppress messages, set verbose = FALSE in plot():
plot(output, seed = 0, verbose = FALSE)
# (3) examples of disabling certain steps
output <- eyeris::glassbox(
demo_data,
detransient = FALSE,
detrend = FALSE,
zscore = FALSE
)
plot(output, seed = 0)
Internal glassbox function for processing individual eyes
Description
Internal glassbox function for processing individual eyes
Usage
glassbox_internal(
file,
interactive_preview = FALSE,
preview_n = 3,
preview_duration = 5,
preview_window = NULL,
verbose = TRUE,
params,
original_call,
seed
)
Arguments
file |
The |
interactive_preview |
A flag to indicate whether to show interactive previews |
preview_n |
Number of preview epochs |
preview_duration |
Duration of each preview in seconds |
preview_window |
Preview window specification |
verbose |
A flag to indicate whether to show verbose output |
params |
A list of pipeline step parameters |
original_call |
The original call to the glassbox function |
seed |
A random seed for reproducible plotting |
Value
An eyeris
object with the processed data lists
Index metadata from data frame
Description
Extracts a single row of metadata from a data frame.
Usage
index_metadata(x, i)
Arguments
x |
The data frame to index |
i |
The row index |
Value
A single row from the data frame
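A minimal sketch; index_metadata() may be internal, so the ::: accessor below is an assumption:
meta <- data.frame(subject = c("001", "002"), task = c("assocret", "memory"))
# pull out the second row of metadata as a single-row data frame
eyeris:::index_metadata(meta, 2)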
Interpolate missing pupil samples
Description
Linear interpolation of time series data. The intended use of this method
is for filling in missing pupil samples (NAs) in the time series. This method
uses the na.approx() function from the zoo package, which implements linear
interpolation via the approx() function from the stats package.
Currently, NAs at the beginning and end of the data are replaced with the
nearest observed values on either end, respectively, using the rule = 2
argument of approx().
Usage
interpolate(eyeris, verbose = TRUE, call_info = NULL)
Arguments
eyeris |
An object of class |
verbose |
A flag to indicate whether to print detailed logging messages.
Defaults to |
call_info |
A list of call information and parameters. If not provided, it will be generated from the function call |
Details
This function is automatically called by glassbox()
by default. Use
glassbox(interpolate = FALSE)
to disable this step as needed.
Users should prefer using glassbox()
rather than invoking this function
directly unless they have a specific reason to customize the pipeline
manually.
Value
An eyeris
object with a new column in timeseries
:
pupil_raw_{...}_interpolate
Note
This function is part of the glassbox()
preprocessing pipeline and is not
intended for direct use in most cases. Use glassbox(interpolate = TRUE)
.
Advanced users may call it directly if needed.
See Also
glassbox()
for the recommended way to run this step as
part of the full eyeris
glassbox preprocessing pipeline.
Examples
demo_data <- eyelink_asc_demo_dataset()
demo_data |>
# set to FALSE to skip (not recommended)
eyeris::glassbox(interpolate = TRUE) |>
plot(seed = 0)
Interpolate missing pupil data using linear interpolation
Description
This function fills missing values (NAs) in pupil data using linear
interpolation. It uses the zoo::na.approx()
function with settings
optimized for pupillometry data.
Usage
interpolate_pupil(x, prev_op, verbose)
Arguments
x |
A data frame containing the pupil time series data |
prev_op |
The name of the previous operation's output column |
verbose |
A flag to indicate whether to print detailed logging messages |
Details
This function is called by the exposed wrapper interpolate()
.
Value
A vector of interpolated pupil values with the same length as the input
Check if object is a binocular eyeris object
Description
Detects whether an object is a binocular eyeris
object created with
binocular_mode = "both"
.
Usage
is_binocular_object(x)
Arguments
x |
The |
Value
Logical indicating whether the object is a binocular eyeris
object
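A brief sketch, assuming the binocular demo file helper eyelink_asc_binocular_demo_dataset() (used elsewhere in this reference) is available; the ::: accessor is an assumption in case this helper is internal:
binoc <- load_asc(
  eyelink_asc_binocular_demo_dataset(),
  binocular_mode = "both"
)
eyeris:::is_binocular_object(binoc) # expected TRUE for binocular_mode = "both"
mono <- load_asc(eyelink_asc_demo_dataset())
eyeris:::is_binocular_object(mono) # expected FALSE for a monocular recording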
Load and parse SR Research EyeLink .asc
files
Description
This function builds upon the eyelinker::read.asc()
function to parse the
messages and metadata within the EyeLink .asc
file. After loading and
additional processing, this function returns an S3 eyeris
class for use in
all subsequent eyeris
pipeline steps and functions.
Usage
load_asc(
file,
block = "auto",
binocular_mode = c("average", "left", "right", "both"),
verbose = TRUE
)
Arguments
file |
An SR Research EyeLink |
block |
Optional block number specification. The following are valid options:
|
binocular_mode |
Optional binocular mode specification. The following are valid options:
|
verbose |
Logical. Whether to print verbose output (default TRUE). |
Details
This function is automatically called by glassbox()
by default. If
needed, customize the parameters for load_asc
by providing a parameter
list.
Users should prefer using glassbox()
rather than invoking this
function directly unless they have a specific reason to customize the
pipeline manually.
Value
An object of S3 class eyeris with the following attributes:
- file: Path to the original .asc file.
- timeseries: Data frame of all raw time series data from the tracker.
- events: Data frame of all event messages and their time stamps.
- blinks: Data frame of all blink events.
- info: Data frame of various metadata parsed from the file header.
- latest: eyeris variable for tracking pipeline run history.
For binocular data with binocular_mode = "both", returns a list containing:
- left: An eyeris object for the left eye data.
- right: An eyeris object for the right eye data.
- original_file: Path to the original .asc file.
Note
This function is part of the glassbox()
preprocessing pipeline and is not
intended for direct use in most cases. Provide parameters via
load_asc = list(...)
.
Advanced users may call it directly if needed.
See Also
eyelinker::read.asc()
which this function wraps.
glassbox()
for the recommended way to run this step
as part of the full eyeris
glassbox preprocessing pipeline.
Examples
demo_data <- eyelink_asc_demo_dataset()
demo_data |>
eyeris::glassbox(load_asc = list(block = 1))
# Other useful parameter configurations
## (1) Basic usage (no block column specified)
demo_data |>
eyeris::load_asc()
## (2) Manual specification of block number
demo_data |>
eyeris::load_asc(block = 3)
## (3) Auto-detect multiple recording segments embedded within the same
## file (i.e., the default behavior)
demo_data |>
eyeris::load_asc(block = "auto")
## (4) Omit block column
demo_data |>
eyeris::load_asc(block = NULL)
Log an error message and abort
Description
Log an error message and abort
Usage
log_error(..., wrap = TRUE, .envir = parent.frame())
Arguments
... |
Character strings to log. Supports glue-style interpolation. |
wrap |
Logical. Whether to wrap long messages (default TRUE). |
.envir |
Environment for glue interpolation (default: parent frame). |
Examples
## Not run:
log_error("Critical error occurred")
file_path <- "missing.csv"
log_error("File not found: {file_path}")
## End(Not run)
Log an informational message
Description
Log an informational message
Usage
log_info(..., verbose = TRUE, wrap = TRUE, .envir = parent.frame())
Arguments
... |
Character strings to log. Supports glue-style interpolation. |
verbose |
Logical. Whether to print the message (default TRUE). |
wrap |
Logical. Whether to wrap long messages (default TRUE). |
.envir |
Environment for glue interpolation (default: parent frame). |
Examples
## Not run:
log_info("Processing file:", "data.csv")
subject_id <- "001"
log_info("Processing subject {subject_id}")
log_info("Found {nrow(data)} rows", "in dataset")
## End(Not run)
Core logging function with timestamp and glue support
Description
Core logging function with timestamp and glue support
Usage
log_message(level, ..., verbose = TRUE, wrap = TRUE, .envir = parent.frame())
Arguments
level |
Character string for log level (INFO, OKAY, WARN, EXIT) |
... |
Character strings to log |
verbose |
Logical. Whether to print the message |
wrap |
Logical. Whether to wrap long messages |
.envir |
Environment for glue interpolation |
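A minimal sketch; log_message() underlies the log_info()/log_warn()-style wrappers documented in this section and may be internal, so the ::: accessor and the exact console formatting are assumptions:
n_files <- 5
eyeris:::log_message("INFO", "Found {n_files} files", verbose = TRUE)
eyeris:::log_message("WARN", "Low confidence samples detected")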
Log a success message
Description
Log a success message
Usage
log_success(..., verbose = TRUE, wrap = TRUE, .envir = parent.frame())
Arguments
... |
Character strings to log. Supports glue-style interpolation. |
verbose |
Logical. Whether to print the message (default TRUE). |
wrap |
Logical. Whether to wrap long messages (default TRUE). |
.envir |
Environment for glue interpolation (default: parent frame). |
Examples
## Not run:
log_success("Processing completed successfully")
n_files <- 5
log_success("Processed {n_files} files successfully")
## End(Not run)
Log a warning message
Description
Log a warning message
Usage
log_warn(..., verbose = TRUE, wrap = TRUE, .envir = parent.frame())
Arguments
... |
Character strings to log. Supports glue-style interpolation. |
verbose |
Logical. Whether to print the message (default TRUE). |
wrap |
Logical. Whether to wrap long messages (default TRUE). |
.envir |
Environment for glue interpolation (default: parent frame). |
Examples
## Not run:
log_warn("Missing data detected")
missing_count <- 10
log_warn("Found {missing_count} missing values")
## End(Not run)
Standardized logging functions for eyeris
Description
These functions provide a consistent logging interface with automatic timestamping, glue-style string interpolation, and support for multiple string arguments.
Arguments
... |
Character strings to be logged. Will be collapsed with spaces. Supports glue-style interpolation with curly braces. |
verbose |
Logical. Whether to actually print the log message. |
wrap |
Logical. Whether to wrap long messages (default TRUE). |
.envir |
Environment for glue interpolation (default: parent frame). |
Lowpass filtering of time series data
Description
The intended use of this method is for smoothing, although by specifying
wp
and ws
differently one can achieve highpass or bandpass filtering
as well. However, only lowpass filtering should be done on pupillometry data.
Usage
lpfilt(
eyeris,
wp = 4,
ws = 8,
rp = 1,
rs = 35,
plot_freqz = FALSE,
call_info = NULL
)
Arguments
eyeris |
An object of class |
wp |
The end of passband frequency in Hz (desired lowpass cutoff).
Defaults to |
ws |
The start of stopband frequency in Hz (required lowpass cutoff).
Defaults to |
rp |
Required maximal ripple within passband in dB. Defaults to |
rs |
Required minimal attenuation within stopband in dB.
Defaults to |
plot_freqz |
A flag to indicate whether to display the filter frequency
response. Defaults to |
call_info |
A list of call information and parameters. If not provided,
it will be generated from the function call. Defaults to |
Details
This function is automatically called by glassbox()
by default. If needed,
customize the parameters for lpfilt
by providing a parameter list. Use
glassbox(lpfilt = FALSE)
to disable this step as needed.
Users should prefer using glassbox()
rather than invoking this function
directly unless they have a specific reason to customize the pipeline
manually.
Value
An eyeris
object with a new column in time series
:
pupil_raw_{...}_lpfilt
Note
This function is part of the glassbox()
preprocessing pipeline and is not
intended for direct use in most cases. Provide parameters via
lpfilt = list(...)
.
Advanced users may call it directly if needed.
See Also
glassbox()
for the recommended way to run this step as
part of the full eyeris glassbox preprocessing pipeline
Examples
demo_data <- eyelink_asc_demo_dataset()
demo_data |>
# set lpfilt to FALSE (instead of a list of params) to skip step
eyeris::glassbox(lpfilt = list(plot_freqz = TRUE)) |>
plot(seed = 0)
Internal function to lowpass filter pupil data
Description
This function lowpass filters pupil data using a Butterworth filter.
This function is called by the exposed wrapper lpfilt()
Usage
lpfilt_pupil(x, prev_op, wp, ws, rp, rs, fs, plot_freqz)
Arguments
x |
A data frame containing pupil data |
prev_op |
The name of the previous operation in the pipeline |
wp |
The end of passband frequency in Hz (desired lowpass cutoff) |
ws |
The start of stopband frequency in Hz (required lowpass cutoff) |
rp |
Required maximal ripple within passband in dB |
rs |
Required minimal attenuation within stopband in dB |
fs |
The sample rate of the data |
plot_freqz |
A flag to indicate whether to display the filter frequency response |
Value
A vector of filtered pupil data
Create baseline label for epoch data
Description
Generates a standardized label for baseline-corrected epoch data.
Usage
make_baseline_label(baselined_data, epoch_id)
Arguments
baselined_data |
A list containing baseline correction information |
epoch_id |
The identifier for the epoch |
Value
A character string with the baseline label
Make a BIDS-compatible filename
Description
Helper function to generate a BIDS-compatible filename based on the provided parameters.
Usage
make_bids_fname(
sub_id,
task_name,
run_num,
desc = "",
ses_id = NULL,
epoch_name = NULL,
epoch_events = NULL,
baseline_events = NULL,
baseline_type = NULL,
eye_suffix = NULL
)
Arguments
sub_id |
The subject ID |
task_name |
The task name |
run_num |
The run number |
desc |
The description |
ses_id |
The session ID |
epoch_name |
The epoch name |
epoch_events |
The epoch events |
baseline_events |
The baseline events |
baseline_type |
The baseline type |
eye_suffix |
The eye suffix |
Value
A BIDS-compatible filename
Generate epoch label from events and data
Description
Creates a standardized label for epoch data based on events or user-provided label.
Usage
make_epoch_label(evs, label, epoched_data)
Arguments
evs |
Event messages or list of events |
label |
User-provided label (optional) |
epoched_data |
List of epoched data for label generation |
Value
A character string with the epoch label
Create interactive epoch gallery report
Description
Generates an interactive HTML gallery report for epoch data with lightbox functionality.
Usage
make_gallery(eyeris, epochs, out, epoch_name, ...)
Arguments
eyeris |
An |
epochs |
Vector of epoch plot file paths or path to zip file |
out |
Output directory for the report |
epoch_name |
Name of the epoch for the report |
... |
Additional parameters passed from bidsify |
Value
No return value; creates and renders an HTML gallery report
Create markdown table from data frame
Description
Converts a data frame into a markdown table.
Usage
make_md_table(df)
Arguments
df |
The data frame to convert |
Value
A character string containing the markdown table content
Create multiline markdown table from data frame
Description
Converts a data frame into a multiline markdown table.
Usage
make_md_table_multiline(df)
Arguments
df |
The data frame to convert |
Value
A character string containing the markdown table content
Create progressive preprocessing summary plot
Description
Internal function to create a comprehensive visualization showing the progressive effects of preprocessing steps on pupil data. This plot displays multiple preprocessing stages overlaid on the same time series, allowing users to see how each step modifies the pupil signal.
Usage
make_prog_summary_plot(
pupil_data,
pupil_steps,
preview_n = 3,
plot_params = list(),
run_id = "run-01",
cex = 2,
eye_suffix = NULL
)
Arguments
pupil_data |
A data frame containing pupil time series data with
multiple preprocessing columns (e.g., |
pupil_steps |
Character vector of column names containing pupil data
at different preprocessing stages
(e.g., |
preview_n |
Number of columns for subplot layout. Defaults to |
plot_params |
Named list of additional parameters to forward to plotting
functions. Defaults to |
run_id |
Character string identifying the run/block (e.g., "run-01").
Used for plot titles and file naming. Defaults to |
cex |
Character expansion factor for plot elements. Defaults to |
eye_suffix |
Optional eye suffix for binocular data |
Details
This function creates a two-panel visualization:
- Top panel: Overlaid time series showing progressive preprocessing effects with different colors for each step
- Bottom panel: Legend identifying each preprocessing step
The plot excludes z-scored data (columns ending with "_z") and only includes steps with sufficient valid data points (>100). Each preprocessing step is displayed with a distinct color, making it easy to see how the signal changes through the pipeline.
Value
NULL (invisibly). Creates a plot showing progressive preprocessing effects with multiple layers overlaid on the same time series
See Also
Create eyeris report
Description
Generates a comprehensive HTML report for eyeris
preprocessing results.
Usage
make_report(eyeris, out, plots, eye_suffix = NULL, ...)
Arguments
eyeris |
An |
out |
Output directory for the report |
plots |
Vector of plot file paths to include in the report |
eye_suffix |
Optional eye suffix (e.g., "eye-L", "eye-R") for binocular data |
... |
Additional parameters passed from bidsify |
Value
Path to the generated R Markdown
file
Process event messages and merge with time series
Description
Matches event messages against templates and extracts metadata, supporting both exact matches and pattern matching with wildcards.
Usage
merge_events_with_timeseries(events, metadata_template, merge = TRUE)
Arguments
events |
Events data frame with timestamps and messages |
metadata_template |
Template pattern to match against |
merge |
Whether to merge results (default: |
Value
A data frame with matched events and extracted metadata
Merge temporary database into main database
Description
Safely merges data from a temporary database into the main project database using file locking to prevent concurrent access issues. This function handles the transactional copying of all tables from the temporary database.
Usage
merge_temp_database(
temp_db_info,
verbose = FALSE,
max_retries = 10,
retry_delay = 1
)
Arguments
temp_db_info |
List containing temp database connection and paths |
verbose |
Whether to print verbose output |
max_retries |
Maximum number of retry attempts for file locking |
retry_delay |
Delay between retry attempts in seconds |
Value
Logical indicating success
Normalize gaze coordinates to screen-relative units
Description
Transforms raw gaze coordinates (in pixels) to normalized coordinates where:
- (0, 0) represents the center of the screen
- Coordinates are scaled to the [-1, 1] range
The normalized distance from the screen center is also calculated.
Usage
normalize_gaze_coords(pupil_df, screen_width, screen_height)
Arguments
pupil_df |
A data frame containing raw gaze
coordinates ( |
screen_width |
The screen width in pixels |
screen_height |
The screen height in pixels |
Value
A data frame with added columns:
- eye_x_norm: Normalized x coordinate in [-1, 1]
- eye_y_norm: Normalized y coordinate in [-1, 1]
- gaze_dist_from_center: Normalized distance from screen center
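The sketch below illustrates, in plain R, the kind of transformation described above; the exact formula used internally is not asserted here, and the column names mirror those documented for this helper:
screen_width <- 1920
screen_height <- 1080
pupil_df <- data.frame(eye_x = c(960, 0, 1920), eye_y = c(540, 0, 1080))
# center the origin on the middle of the screen and scale to [-1, 1]
pupil_df$eye_x_norm <- (pupil_df$eye_x - screen_width / 2) / (screen_width / 2)
pupil_df$eye_y_norm <- (pupil_df$eye_y - screen_height / 2) / (screen_height / 2)
# normalized Euclidean distance from the screen center
pupil_df$gaze_dist_from_center <-
  sqrt(pupil_df$eye_x_norm^2 + pupil_df$eye_y_norm^2)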
Parse call stack information
Description
Extracts function name and arguments from a call string.
Usage
parse_call_stack(call_str)
Arguments
call_str |
A string representation of a function call |
Value
A list containing the function name and full call string
Parse EyeLink version and model information
Description
Extracts and cleans version and model information from EyeLink metadata.
Usage
parse_eyelink_info(version_str, model = NA)
Arguments
version_str |
The version string from EyeLink metadata |
model |
The model string from EyeLink metadata (default: NA) |
Value
A list containing cleaned version and model strings
Build a generic operation (extension) for the eyeris
pipeline
Description
pipeline_handler
enables flexible integration of custom data
processing functions into the eyeris
pipeline. Under the hood,
each preprocessing function in eyeris
is a wrapper around a
core operation that gets tracked, versioned, and stored using this
pipeline_handler
method. As such, custom pipeline steps must conform
to the eyeris
protocol for maximum compatibility with the downstream
functions we provide.
Usage
pipeline_handler(eyeris, operation, new_suffix, ...)
Arguments
eyeris |
An object of class |
operation |
The name of the function to apply to the time series data.
This custom function should accept a data frame |
new_suffix |
A character string indicating the suffix you would like to be appended to the name of the previous operation's column, which will be used for the new column name in the updated preprocessed data frame(s) |
... |
Additional (optional) arguments passed to the |
Details
Following the eyeris protocol also ensures that:
- all operations follow a predictable structure, and
- new pupil data columns based on previous operations in the chain can be dynamically constructed within the core time series data frame.
Value
An updated eyeris
object with the new column added to the
timeseries
data frame and the latest
pointer updated to the name of the
most recently added column plus all previous columns (i.e., the history "trace"
of preprocessing steps from start to present)
See Also
For more details, please check out the following vignettes:
Anatomy of an eyeris Object
vignette("anatomy", package = "eyeris")
Building Your Own Custom Pipeline Extensions
vignette("custom-extensions", package = "eyeris")
Examples
# first, define your custom data preprocessing function
winsorize_pupil <- function(x, prev_op, lower = 0.01, upper = 0.99) {
vec <- x[[prev_op]]
q <- quantile(vec, probs = c(lower, upper), na.rm = TRUE)
vec[vec < q[1]] <- q[1]
vec[vec > q[2]] <- q[2]
vec
}
# second, construct your `pipeline_handler` method wrapper
winsorize <- function(eyeris, lower = 0.01, upper = 0.99, call_info = NULL) {
# create call_info if not provided
call_info <- if (is.null(call_info)) {
list(
call_stack = match.call(),
parameters = list(lower = lower, upper = upper)
)
} else {
call_info
}
# handle binocular objects
if (eyeris:::is_binocular_object(eyeris)) {
# process left and right eyes independently
left_result <- eyeris$left |>
pipeline_handler(
winsorize_pupil,
"winsorize",
lower = lower,
upper = upper,
call_info = call_info
)
right_result <- eyeris$right |>
pipeline_handler(
winsorize_pupil,
"winsorize",
lower = lower,
upper = upper,
call_info = call_info
)
# return combined structure
list_out <- list(
left = left_result,
right = right_result,
original_file = eyeris$original_file,
raw_binocular_object = eyeris$raw_binocular_object
)
class(list_out) <- "eyeris"
return(list_out)
} else {
# regular eyeris object, process normally
eyeris |>
pipeline_handler(
winsorize_pupil,
"winsorize",
lower = lower,
upper = upper,
call_info = call_info
)
}
}
# and voilà, you can now connect your custom extension
# directly into your custom `eyeris` pipeline definition!
custom_eye <- system.file("extdata", "memory.asc", package = "eyeris") |>
eyeris::load_asc(block = "auto") |>
eyeris::deblink(extend = 50) |>
winsorize()
plot(custom_eye, seed = 1)
Plot pre-processed pupil data from eyeris
Description
S3 plotting method for objects of class eyeris
. Plots a single-panel
timeseries for a subset of the pupil time series at each preprocessing step.
The intended use of this function is to provide a simple method for
qualitatively assessing the consequences of the preprocessing recipe and
parameters on the raw pupillary signal.
Usage
## S3 method for class 'eyeris'
plot(
x,
...,
steps = NULL,
preview_n = NULL,
preview_duration = NULL,
preview_window = NULL,
seed = NULL,
block = 1,
plot_distributions = FALSE,
suppress_prompt = TRUE,
verbose = TRUE,
add_progressive_summary = FALSE,
eye = c("left", "right", "both"),
num_previews = deprecated()
)
Arguments
x |
An object of class |
... |
Additional arguments to be passed to |
steps |
Which steps to plot; defaults to |
preview_n |
Number of random example "epochs" to generate for previewing the effect of each preprocessing step on the pupil time series |
preview_duration |
Time in seconds of each randomly selected preview |
preview_window |
The start and stop raw timestamps used to subset the
preprocessed data from each step of the |
seed |
Random seed for current plotting session. Leave NULL to select
|
block |
For multi-block recordings, specifies which block to plot.
Defaults to 1. When a single |
plot_distributions |
Logical flag to indicate whether to plot both
diagnostic pupil time series and accompanying histograms of the pupil
samples at each processing step. Defaults to |
suppress_prompt |
Logical flag to disable interactive confirmation
prompts during plotting. Defaults to |
verbose |
A logical flag to indicate whether to print status messages to
the console. Defaults to |
add_progressive_summary |
Logical flag to indicate whether to add a
progressive summary plot after plotting. Defaults to |
eye |
For binocular data, specifies which eye to plot: "left", "right", or "both". Defaults to "left". For "both", currently plots left eye data (use eye="right" for right eye data) |
num_previews |
(Deprecated) Use |
Value
No return value; iteratively plots a subset of the pupil time series from each preprocessing step run
See Also
Examples
# first, generate the preprocessed pupil data
my_eyeris_data <- system.file("extdata", "memory.asc", package = "eyeris") |>
eyeris::load_asc() |>
eyeris::deblink(extend = 50) |>
eyeris::detransient() |>
eyeris::interpolate() |>
eyeris::lpfilt(plot_freqz = TRUE) |>
eyeris::zscore()
# controlling the time series range (i.e., preview window) in your plots:
## example 1: using the default 10000 to 20000 ms time subset
plot(my_eyeris_data, seed = 0, add_progressive_summary = TRUE)
## example 2: using a custom time subset (i.e., 1 to 500 ms)
plot(
my_eyeris_data,
preview_window = c(0.01, 0.5),
seed = 0,
add_progressive_summary = TRUE
)
# controlling which block of data you would like to plot:
## example 1: plots first block (default)
plot(my_eyeris_data, seed = 0)
## example 2: plots a specific block
plot(my_eyeris_data, block = 1, seed = 0)
## example 3: plots a specific block along with a custom preview window
## (i.e., 1000 to 2000 ms)
plot(
my_eyeris_data,
block = 1,
preview_window = c(1, 2),
seed = 0
)
Plot binocular correlation between left and right eye data
Description
Creates correlation plots showing the relationship between left and right eye measurements for pupil size, x-coordinates, and y-coordinates. This function is useful for validating binocular data quality and assessing the correlation between the two eyes.
Usage
plot_binocular_correlation(
eyeris,
block = 1,
variables = c("pupil", "x", "y"),
main = "",
col_palette = "viridis",
sample_rate = NULL,
verbose = TRUE
)
Arguments
eyeris |
An object of class |
block |
Block number to plot (default: 1) |
variables |
Variables to plot correlations for. Defaults to
|
main |
Title for the overall plot (default: "Binocular Correlation") |
col_palette |
Color palette for the plots (default: "viridis") |
sample_rate |
Sample rate in Hz (optional, for time-based sampling) |
verbose |
Logical flag to indicate whether to print status messages (default: TRUE) |
Value
No return value; creates correlation plots
Examples
# For binocular data loaded with binocular_mode = "both"
binocular_data <- load_asc(eyelink_asc_binocular_demo_dataset(), binocular_mode = "both")
plot_binocular_correlation(binocular_data)
# For binocular data loaded with binocular_mode = "average"
# (correlation plot will show original left vs right before averaging)
avg_data <- load_asc(eyelink_asc_binocular_demo_dataset(), binocular_mode = "average")
plot_binocular_correlation(avg_data$raw_binocular_object)
Internal helper to plot detrending overlay
Description
This function replicates the exact detrending visualization from the
glassbox()
interactive preview mode. It uses robust_plot()
to show the
most recent detrended pupil signal overlaid with the fitted linear trend.
Usage
plot_detrend_overlay(
pupil_data,
pupil_steps,
preview_n = preview_n,
plot_params = list(),
suppress_prompt = TRUE
)
Arguments
pupil_data |
A single block of pupil time series data
(e.g. |
preview_n |
Number of columns for |
plot_params |
A named list of additional parameters to forward to
|
suppress_prompt |
Logical. Whether to skip prompting. Default = TRUE. |
Value
Logical indicating whether detrend overlay was plotted successfully
Create gaze heatmap of eye coordinates
Description
Creates a heatmap showing the distribution of eye_x and eye_y coordinates across the entire screen area. The heatmap shows where the participant looked most frequently during the recording period.
Usage
plot_gaze_heatmap(
eyeris,
block = 1,
screen_width = NULL,
screen_height = NULL,
n_bins = 50,
col_palette = "viridis",
main = "Gaze Heatmap",
xlab = "Screen X (pixels)",
ylab = "Screen Y (pixels)",
sample_rate = NULL,
eye_suffix = NULL
)
Arguments
eyeris |
An object of class |
block |
Block number to plot (default: 1) |
screen_width |
Screen width in pixels from |
screen_height |
Screen height in pixels from |
n_bins |
Number of bins for the heatmap grid (default: 50) |
col_palette |
Color palette for the heatmap (default: "viridis") |
main |
Title for the plot (default: "Gaze Heatmap") |
xlab |
X-axis label (default: "Screen X (pixels)") |
ylab |
Y-axis label (default: "Screen Y (pixels)") |
sample_rate |
Sample rate in Hz (optional) |
eye_suffix |
Eye suffix for binocular data (default: NULL) |
Value
No return value; creates a heatmap plot
Examples
demo_data <- eyelink_asc_demo_dataset()
eyeris_preproc <- glassbox(demo_data)
plot_gaze_heatmap(eyeris = eyeris_preproc, block = 1)
Plot pupil distribution histogram
Description
Creates a histogram of pupil size distribution with customizable parameters.
Usage
plot_pupil_distribution(data, color, main, xlab, backuplab = NULL)
Arguments
data |
The pupil data to plot |
color |
The color for the histogram bars |
main |
The main title for the plot |
xlab |
The x-axis label |
backuplab |
A backup label if xlab is NULL |
Value
No return value; creates a histogram plot
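A minimal sketch using simulated pupil-like values; plot_pupil_distribution() may be internal, so the ::: accessor is an assumption:
set.seed(0)
fake_pupil <- rnorm(1000, mean = 3000, sd = 150) # arbitrary pupil-like samples
eyeris:::plot_pupil_distribution(
  data = fake_pupil,
  color = "skyblue",
  main = "Pupil size distribution",
  xlab = "Pupil size (a.u.)"
)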
Plot with seed handling for glassbox pipeline
Description
Internal function to handle plotting with consistent seed management for the glassbox pipeline interactive previews.
Usage
plot_with_seed(
file,
step_counter,
seed,
preview_n,
preview_duration,
preview_window,
only_linear_trend,
next_step,
block_name = NULL,
verbose = TRUE
)
Arguments
file |
The |
step_counter |
Current step counter |
seed |
A random seed for reproducible plotting |
preview_n |
Number of preview epochs |
preview_duration |
Duration of each preview in seconds |
preview_window |
Preview window specification |
only_linear_trend |
A flag to indicate whether to show only linear trend |
next_step |
Next step information |
block_name |
Block name (optional, for multi-block processing) |
verbose |
A flag to indicate whether to show verbose output |
Print lightbox image HTML for zip-based gallery
Description
Generates HTML code for lightbox image gallery functionality that loads images from zip files using zip.js.
Usage
print_lightbox_img_html(zip_path, image_filenames = NULL, verbose = TRUE)
Arguments
zip_path |
Path to the zip file containing images (can be absolute or relative) |
image_filenames |
Vector of image filenames within the zip |
verbose |
Logical. Whether to print verbose output (default TRUE). |
Value
A character string containing HTML code for the lightbox gallery
Print lightbox image HTML (legacy)
Description
Generates HTML code for lightbox image gallery functionality using individual image files (legacy behavior).
Usage
print_lightbox_img_html_legacy(images)
Arguments
images |
Vector of image file paths |
Value
A character string containing HTML code for the lightbox gallery
Print plots in markdown format
Description
Generates markdown code to display plots in the report.
Usage
print_plots(plots, eye_suffix = NULL)
Arguments
plots |
Vector of plot file paths |
eye_suffix |
Optional eye suffix for binocular data |
Value
A character string containing markdown plot references
Process large database query in chunks
Description
Handles really large databases by processing queries in reasonably sized chunks to avoid memory issues. Data can be written to CSV or Parquet files as it's processed.
Usage
process_chunked_query(
con,
query,
chunk_size = 1e+06,
output_file = NULL,
process_chunk = NULL,
verbose = TRUE
)
Arguments
con |
Database connection |
query |
SQL query string to execute |
chunk_size |
Number of rows to fetch per chunk (default: 1000000) |
output_file |
Optional output file path for writing chunks. If provided, chunks will be appended to this file. File format determined by extension (.csv or .parquet) |
process_chunk |
Optional function to process each chunk. Function should accept a data.frame and return logical indicating success. If not provided and output_file is specified, chunks are written to file. |
verbose |
Whether to print progress messages (default: TRUE) |
Value
List containing summary information about the chunked processing
Examples
## Not run:
# These examples require an existing eyeris database
con <- eyeris_db_connect("/path/to/bids", "my-project")
# Process large query and write to CSV
process_chunked_query(
con,
"SELECT * FROM large_table WHERE condition = 'something'",
chunk_size = 50000,
output_file = "large_export.csv"
)
# Process large query with custom chunk processing
process_chunked_query(
con,
"SELECT * FROM large_table",
chunk_size = 25000,
process_chunk = function(chunk) {
# Custom processing here
processed_data <- some_analysis(chunk)
return(TRUE)
}
)
eyeris_db_disconnect(con)
## End(Not run)
Epoch and baseline processor
Description
This function processes a single block of pupil data to extract epochs and optionally compute and apply baseline corrections. It handles the core epoching and baselining logic for a single block of data.
Usage
process_epoch_and_baselines(eyeris, timestamps, evs, lims, hz, verbose)
Arguments
eyeris |
An object of class |
timestamps |
A list containing start and end timestamps |
evs |
Events specification for epoching (character vector or list) |
lims |
Time limits for epochs (numeric vector) |
hz |
Sampling rate in Hz |
verbose |
A flag to indicate whether to print detailed logging messages |
Details
This function is called by the internal epoch_and_baseline_block()
function.
Value
A list containing epoch and baseline results
Process eyeris data and create eyeris object
Description
Process eyeris data and create eyeris object
Usage
process_eyeris_data(x, block, eye, hz, pupil_type, file, binoc, binoc_mode)
Arguments
x |
The eyelinker object |
block |
Block specification |
eye |
Eye specification ("L", "R", "LR", "left", "right") |
hz |
Sample rate |
pupil_type |
Pupil data type |
file |
Original file path |
binoc |
Boolean binocular data detected |
binoc_mode |
Binocular mode ("average", "left", "right", "both") |
Value
An eyeris
object
Create a progress bar for tracking operations
Description
Creates a progress bar using the progress package with customizable formatting.
Usage
progress_bar(
total,
msg = "Processing",
width = 80,
show_percent = TRUE,
show_eta = TRUE,
clear = FALSE
)
Arguments
total |
The total number of items to process |
msg |
The message to display before the progress bar |
width |
The width of the progress bar in characters |
show_percent |
Whether to show percentage completion |
show_eta |
Whether to show estimated time remaining |
clear |
Whether to clear the progress bar when done |
Value
A progress bar object from the progress package
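A minimal sketch; progress_bar() may be internal, so the ::: accessor is an assumption, and tick() is used below because the return value comes from the progress package:
pb <- eyeris:::progress_bar(total = 5, msg = "Processing blocks")
for (i in 1:5) {
  Sys.sleep(0.1) # stand-in for real work
  pb$tick()
}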
Prompt user for continuation
Description
Prompts the user to continue or cancel the current operation.
Usage
prompt_user()
Value
A logical flag indicating whether the user chose to continue
Read parquet files back into R
Description
Convenience function to read the parquet files created by eyeris_db_to_parquet back into a single data frame or list of data frames by data type.
Usage
read_eyeris_parquet(
parquet_dir,
db_name = NULL,
data_type = NULL,
return_list = FALSE,
pattern = "*.parquet",
verbose = TRUE
)
Arguments
parquet_dir |
Directory containing the parquet files, or path to database-specific folder |
db_name |
Optional database name to read from (if parquet_dir contains multiple database folders) |
data_type |
Optional data type to read (if NULL, reads all data types) |
return_list |
Whether to return a list by data type (TRUE) or combined data frame (FALSE, default) |
pattern |
Pattern to match parquet files (default: "*.parquet") |
verbose |
Whether to print progress messages (default: TRUE) |
Value
Combined data frame from all parquet files, or list of data frames by data type
Examples
# Minimal self-contained example that avoids database creation
if (requireNamespace("arrow", quietly = TRUE)) {
# create a temporary folder structure: parquet/<db_name>
base_dir <- file.path(tempdir(), "derivatives", "parquet")
db_name <- "example-db"
dir.create(file.path(base_dir, db_name), recursive = TRUE, showWarnings = FALSE)
# write two small parquet parts for a single data type
part1 <- data.frame(time = 1:5, value = 1:5)
part2 <- data.frame(time = 6:10, value = 6:10)
arrow::write_parquet(
part1,
file.path(
base_dir, db_name, paste0(db_name, "_timeseries_part-01-of-02.parquet")
)
)
arrow::write_parquet(
part2,
file.path(
base_dir, db_name, paste0(db_name, "_timeseries_part-02-of-02.parquet")
)
)
# read them back as combined data frame
data <- read_eyeris_parquet(base_dir, db_name = db_name)
# read as list by data type
data_by_type <- read_eyeris_parquet(base_dir, db_name = db_name, return_list = TRUE)
# read specific data type only
timeseries_data <- read_eyeris_parquet(base_dir, db_name = db_name, data_type = "timeseries")
}
Render R Markdown report
Description
Renders an R Markdown file to HTML and cleans up the temporary file.
Usage
render_report(rmd_f)
Arguments
rmd_f |
Path to the R Markdown file to render |
Value
No return value; renders HTML report and removes temporary file
Robust plotting function with error handling
Description
A wrapper around base plotting functions that handles errors and missing data gracefully.
Usage
robust_plot(y, x = NULL, ...)
Arguments
y |
The y-axis data to plot |
x |
The x-axis data (optional, defaults to sequence) |
... |
Additional arguments passed to plot() |
Value
No return value; creates a plot or displays warning messages
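A minimal sketch using a series with a run of missing samples; robust_plot() may be internal, so the ::: accessor is an assumption:
y <- c(rnorm(50), rep(NA, 5), rnorm(50))
# plots against an index sequence when x is not supplied
eyeris:::robust_plot(y)
# or supply explicit x values (additional base-graphics arguments pass via ...)
eyeris:::robust_plot(y, x = seq_along(y))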
Internal function to run bidsify on a single eye
Description
Internal function to run bidsify on a single eye
Usage
run_bidsify(
eyeris,
save_all = TRUE,
epochs_list = NULL,
bids_dir = NULL,
participant_id = NULL,
session_num = NULL,
task_name = NULL,
run_num = NULL,
save_raw = TRUE,
html_report = TRUE,
report_seed = 0,
report_epoch_grouping_var_col = "matched_event",
eye_suffix = NULL,
verbose = TRUE,
csv_enabled = TRUE,
db_enabled = FALSE,
db_path = "my-project",
parallel_processing = FALSE,
raw_binocular_object = NULL,
skip_db_cleanup = FALSE
)
Arguments
eyeris |
An |
save_all |
Whether to save all data |
epochs_list |
A list of epochs to include |
bids_dir |
The directory to save the bids data |
participant_id |
The participant id |
session_num |
The session number |
task_name |
The task name |
run_num |
The run number |
save_raw |
Whether to save raw data |
html_report |
Whether to generate an html report |
report_seed |
The seed for the report |
report_epoch_grouping_var_col |
The column to use for grouping epochs in the report |
eye_suffix |
The suffix to add to the eye data |
verbose |
Whether to print verbose output |
csv_enabled |
Whether to save csv files |
db_enabled |
Whether to save data to the database |
db_path |
The path to the database |
parallel_processing |
Whether to enable parallel database processing |
raw_binocular_object |
The raw binocular object |
skip_db_cleanup |
Whether to skip database cleanup, used internally to avoid unintended overwriting when calling complementary binocular bidsify processing commands |
Value
An eyeris object
Sanitize event tag string into canonical epoch label
Description
Converts event tag strings into standardized epoch labels by removing special characters and converting to camel case.
Usage
sanitize_event_tag(string, prefix = "epoch_")
Arguments
string |
The event tag string to sanitize |
prefix |
The prefix to add to the sanitized string (default: "epoch_") |
Value
A sanitized epoch label string
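A hedged illustration (the exact output depends on the sanitizer's rules, so the commented results are assumptions rather than guaranteed values):
eyeris:::sanitize_event_tag("PROBE_START_{trial}")
# expected to return a camel-cased label with the default prefix,
# e.g. something like "epoch_probeStartTrial"
eyeris:::sanitize_event_tag("stimulus onset!", prefix = "epoch_")
# special characters removed, camel case applied, e.g. "epoch_stimulusOnset"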
Save detrend plots for each block
Description
Generates and saves detrend diagnostic plots for each block in the eyeris object.
Usage
save_detrend_plots(
eyeris,
out_dir,
preview_n = 3,
plot_params = list(),
eye_suffix = NULL,
verbose = TRUE
)
Arguments
eyeris |
An eyeris object |
out_dir |
Output directory for saving plots |
preview_n |
Number of preview samples for plotting |
plot_params |
Additional plotting parameters |
eye_suffix |
Optional eye suffix for binocular data |
verbose |
Logical. Whether to print verbose output (default TRUE). |
Value
No return value; saves detrend plots to the specified directory
Save progressive summary plots for each block
Description
Generates and saves progressive summary plots for each block in the eyeris object.
Usage
save_progressive_summary_plots(
eyeris,
out_dir,
preview_n = 3,
plot_params = list(),
eye_suffix = NULL,
verbose = TRUE
)
Arguments
eyeris |
An eyeris object |
out_dir |
Output directory for saving plots |
preview_n |
Number of preview samples for plotting |
plot_params |
Additional plotting parameters |
eye_suffix |
Optional eye suffix for binocular data |
verbose |
Logical. Whether to print verbose output (default TRUE). |
Value
A character string containing markdown references to the saved plots
Check if binocular correlations should be plotted
Description
Validates that binocular correlations should be plotted.
Usage
should_plot_binoc_cors(x)
Arguments
x |
The eyeris object to check |
Value
Logical indicating whether binocular correlations should be plotted
Slice epoch from raw time series data
Description
Extracts a time segment from raw time series data based on start and end times.
Usage
slice_epoch(x_raw, s, e)
Arguments
x_raw |
The raw time series data frame |
s |
Start time in milliseconds |
e |
End time in milliseconds |
Value
A data frame containing the epoch data
Slice epochs with no explicit limits
Description
Creates epochs using adjacent time stamps without explicit time limits.
Usage
slice_epochs_no_limits(x_raw, all_ts)
Arguments
x_raw |
The raw time series data frame |
all_ts |
A data frame containing timestamp information |
Value
A list of epoch data frames
Slice epochs with explicit limits
Description
Creates epochs using explicit time limits around a central timestamp.
Usage
slice_epochs_with_limits(x_raw, cur_ts, lims, hz)
Arguments
x_raw |
The raw time series data frame |
cur_ts |
The central timestamp |
lims |
Time limits in seconds (negative for before, positive for after) |
hz |
Sampling rate in Hz |
Value
A data frame containing the epoch data
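For intuition, the limits and sampling rate jointly determine how many samples land in each epoch; a quick back-of-the-envelope sketch (illustrative only; the internal slicing logic may handle edge samples differently):
lims <- c(-1, 1)  # 1 s before through 1 s after the central timestamp
hz <- 1000        # sampling rate in Hz
n_samples <- diff(lims) * hz + 1  # roughly 2001 samples per epoch
n_samples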
Calculate pupil speed using finite differences
Description
Computes the speed of pupil changes using finite differences between consecutive time points. This is a helper function for the detransient step.
Usage
speed(x, y)
Arguments
x |
A numeric vector of pupil data |
y |
A numeric vector of time data |
Value
A vector of pupil speeds at each time point
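For intuition, a finite-difference pupil speed can be sketched as the larger of the absolute backward and forward differences at each sample, in the spirit of Kret & Sjak-Shie (2018); this is an illustrative approximation, not necessarily the exact internal implementation:
x <- c(3000, 3005, 2990, NA, 3010)  # pupil samples (arbitrary units)
y <- seq_along(x)                   # time stamps (e.g., ms)
back <- c(NA, diff(x) / diff(y))    # backward difference
fwd <- c(diff(x) / diff(y), NA)     # forward difference
pmax(abs(back), abs(fwd), na.rm = TRUE)  # speed estimate at each sample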
Extract confounding variables calculated separately for each pupil data file
Description
Calculates various confounding variables for pupil data, including blink
statistics, gaze position metrics, and pupil size characteristics. These
confounds are calculated separately for each preprocessing step, recording
block, and epoched time series in the eyeris object.
Usage
summarize_confounds(eyeris)
Arguments
eyeris |
An object of class eyeris |
Value
An eyeris object with a new nested list of data frames:
$confounds
The confounds are organized hierarchically by block and preprocessing step.
Each step contains metrics such as:
- Blink rate and duration statistics
- Gaze position (x, y) mean and standard deviation
- Pupil size mean, standard deviation, and range
- Missing data percentage
Examples
# load demo dataset
demo_data <- eyelink_asc_demo_dataset()
# calculate confounds for all blocks and preprocessing steps
confounds <- demo_data |>
eyeris::glassbox() |>
eyeris::epoch(
events = "PROBE_{type}_{trial}",
limits = c(-1, 1), # grab 1 second prior to and 1 second post event
label = "prePostProbe" # custom epoch label name
) |>
eyeris::summarize_confounds()
# access confounds for entire time series for a specific block and step
confounds$confounds$unepoched_timeseries
# access confounds for a specific epoched time series
# for a specific block and step
confounds$confounds$epoched_timeseries
confounds$confounds$epoched_epoch_wide
Tag blinks in pupil data
Description
Identifies when pupil data corresponds to eye blinks based on missing values in the pupil vector.
Usage
tag_blinks(pupil_df, pupil_vec)
Arguments
pupil_df |
A data frame containing pupil data |
pupil_vec |
A numeric vector containing pupil diameter values |
Value
A data frame with an added column:
- is_blink: Logical indicating whether the pupil sample corresponds to a blink (NA values)
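A hedged example (assuming the helper simply flags NA pupil samples, per the description above, and is reachable via eyeris::: as an internal function):
pupil_df <- data.frame(time_ms = 1:5)
pupil_vec <- c(3000, NA, NA, 3100, 3050)
eyeris:::tag_blinks(pupil_df, pupil_vec)
# expected to add is_blink = FALSE, TRUE, TRUE, FALSE, FALSE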
Tag gaze coordinates as on/off screen
Description
Identifies when gaze coordinates fall outside the screen boundaries, with an optional buffer zone to account for potential overshoot in eye tracking.
Usage
tag_gaze_coords(pupil_df, screen_width, screen_height, overshoot_buffer = 0.05)
Arguments
pupil_df |
A data frame containing gaze coordinates |
screen_width |
The screen width in pixels |
screen_height |
The screen height in pixels |
overshoot_buffer |
Additional buffer zone beyond screen edges
(default: 0.05) |
Value
A data frame with an added column:
- is_offscreen: Logical indicating whether gaze falls outside the screen boundaries
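A conceptual sketch of the boundary check, not a call to the internal function itself; treating the buffer as a proportion of each screen dimension is an assumption here, so consult the function source for the precise rule:
screen_width <- 1920
screen_height <- 1080
overshoot_buffer <- 0.05  # assumed proportion of each screen dimension
x <- c(-200, 500, 2100)
y <- c(540, 540, 540)
is_offscreen <- x < -overshoot_buffer * screen_width |
  x > (1 + overshoot_buffer) * screen_width |
  y < -overshoot_buffer * screen_height |
  y > (1 + overshoot_buffer) * screen_height
is_offscreen  # TRUE, FALSE, TRUE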
Tick a progress bar
Description
Advances a progress bar by the specified amount.
Usage
tick(pb, by = 1)
Arguments
pb |
The progress bar object to tick |
by |
The number of steps to advance (default: 1) |
Value
No return value; advances the progress bar
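A hedged usage sketch with a progress bar from the progress package (tick() is a thin internal wrapper, so eyeris::: may be required):
pb <- progress::progress_bar$new(total = 5)
for (i in 1:5) {
  Sys.sleep(0.1)
  eyeris:::tick(pb)  # advance by 1 step (the default)
}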
Write data to CSV and/or database (helper function)
Description
This helper function writes data to CSV files and/or database based on the configuration. Useful for large-scale cloud compute where CSV files may be unnecessary when using database storage.
Usage
write_csv_and_db(
data,
csv_path,
csv_enabled = TRUE,
db_con = NULL,
data_type = NULL,
sub = NULL,
ses = NULL,
task = NULL,
run = NULL,
eye_suffix = NULL,
epoch_label = NULL,
verbose = FALSE
)
Arguments
data |
Data frame to write |
csv_path |
Full path where CSV file should be written (ignored if csv_enabled = FALSE) |
csv_enabled |
Whether to write CSV files (defaults to TRUE for backward compatibility) |
db_con |
Database connection (NULL if not enabled) |
data_type |
Type of data ("timeseries", "epochs", "epoch_timeseries", "epoch_summary", "events", "blinks", "confounds") |
sub |
Subject ID |
ses |
Session ID |
task |
Task name |
run |
Run number (optional) |
eye_suffix |
Eye suffix for binocular data (optional) |
epoch_label |
Epoch label for epoched data (optional, used in table naming) |
verbose |
Whether to print verbose output |
Value
Logical indicating success
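A hedged, CSV-only sketch (no database connection, so db_con = NULL), assuming the internal helper is reachable via eyeris::::
dat <- data.frame(time_ms = 1:3, pupil = c(3000, 3010, 2995))
csv_path <- file.path(tempdir(), "sub-001_ses-01_task-demo_timeseries.csv")
ok <- eyeris:::write_csv_and_db(
  data = dat,
  csv_path = csv_path,
  csv_enabled = TRUE,
  db_con = NULL,
  data_type = "timeseries",
  sub = "001",
  ses = "01",
  task = "demo"
)
ok  # TRUE on success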
Write eyeris data to database
Description
Writes eyeris data to the project database as an alternative to CSV files. Creates or updates tables as needed.
Usage
write_eyeris_data_to_db(
data,
con,
data_type,
sub,
ses,
task,
run = NULL,
eye_suffix = NULL,
epoch_label = NULL,
append = TRUE,
verbose = FALSE
)
Arguments
data |
Data frame to write |
con |
Database connection |
data_type |
Type of data ("timeseries", "epochs", "epoch_timeseries", "epoch_summary", "events", "blinks") |
sub |
Subject ID |
ses |
Session ID |
task |
Task name |
run |
Run number |
eye_suffix |
Optional eye suffix for binocular data |
epoch_label |
Optional epoch label for epoched data (used in table naming) |
append |
Whether to append to existing table (default TRUE) |
verbose |
Whether to print verbose output |
Value
Logical indicating success
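A hedged sketch using an in-memory duckdb connection (assuming the internal helper accepts any DBI connection; table naming is handled internally):
if (requireNamespace("duckdb", quietly = TRUE)) {
  con <- DBI::dbConnect(duckdb::duckdb())
  dat <- data.frame(time_ms = 1:3, pupil = c(3000, 3010, 2995))
  eyeris:::write_eyeris_data_to_db(
    data = dat,
    con = con,
    data_type = "timeseries",
    sub = "001",
    ses = "01",
    task = "demo"
  )
  DBI::dbDisconnect(con, shutdown = TRUE)
}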
Zip and cleanup source figure files
Description
Creates zip files for all PNG and JPG files in each run directory under source/figures/, then deletes the individual image files. This reduces file count while preserving all figure data in compressed format.
Usage
zip_and_cleanup_source_figures(report_path, eye_suffix = NULL, verbose = FALSE)
Arguments
report_path |
Path to the report directory containing source/figures/ |
eye_suffix |
Optional eye suffix for binocular data |
verbose |
Whether to print verbose output |
Value
List of created zip file paths
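A hedged sketch that builds the expected source/figures/<run> layout in a temporary directory before zipping (the run-directory and file names here are assumptions for illustration; a system zip utility must be available):
report_path <- file.path(tempdir(), "example-report")
fig_dir <- file.path(report_path, "source", "figures", "run-01")
dir.create(fig_dir, recursive = TRUE, showWarnings = FALSE)
grDevices::png(file.path(fig_dir, "example-plot.png"))
plot(1:10)
grDevices::dev.off()
zips <- eyeris:::zip_and_cleanup_source_figures(report_path)
zips  # paths to the created zip archives; the original PNGs are removed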
Z-score pupil time series data
Description
The intended use of this method is to scale the arbitrary units of the pupil size time series to have a mean of 0 and a standard deviation of 1. This is accomplished by mean centering the data points and then dividing them by their standard deviation (i.e., z-scoring the data, similar to base::scale()). Opting to z-score your pupil data helps with trial-level and between-subjects analyses, where the arbitrary units of pupil size recorded by the tracker do not scale across participants and therefore make analyses that depend on data from more than one participant difficult to interpret.
Usage
zscore(eyeris, call_info = NULL)
Arguments
eyeris |
An object of class eyeris |
call_info |
A list of call information and parameters. If not provided, it will be generated from the function call |
Details
This function is automatically called by glassbox() by default. Use glassbox(zscore = FALSE) to disable this step as needed.
Users should prefer using glassbox() rather than invoking this function directly unless they have a specific reason to customize the pipeline manually.
In general, it is common to z-score pupil data within any given participant, and furthermore, z-score that participant's data as a function of block number (for tasks/experiments where participants complete more than one block of trials) to account for potential time-on-task effects across task/experiment blocks.
As such, if you use the eyeris package as intended, you should NOT need to specify any groups for the participant/block-level situations described above. This is because eyeris is designed to preprocess a single block of pupil data for a single participant, one at a time. Therefore, when you later merge all of the preprocessed data from eyeris, each individual, preprocessed block of data for each participant will have already been independently scaled from the others.
Additionally, if you intend to compare mean z-scored pupil size across task conditions, such as memory successes vs. memory failures, do NOT set your behavioral outcome (i.e., success/failure) variable as a grouping variable within your analysis. If you do, you will obtain a mean pupil size of 0 and a standard deviation of 1 within each group (since the scaled pupil size would be calculated separately on the time series from each outcome group). Instead, compute the z-score on the entire pupil time series (before epoching the data), and then split and take the mean of the z-scored time series as a function of your condition variable (see the sketch below).
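To make that caution concrete, here is a hedged dplyr sketch contrasting the recommended approach (scale the full time series, then summarize by condition) with the problematic one (scaling within each condition group); the column names are illustrative:
library(dplyr)
ts <- data.frame(
  pupil = rnorm(100, mean = 3000, sd = 200),
  outcome = rep(c("success", "failure"), each = 50)
)
# recommended: scale once across the whole time series, then split by condition
ts |>
  mutate(pupil_z = as.numeric(scale(pupil))) |>
  group_by(outcome) |>
  summarize(mean_pupil_z = mean(pupil_z))
# problematic: grouping before scaling forces every group to mean 0, sd 1
ts |>
  group_by(outcome) |>
  mutate(pupil_z = as.numeric(scale(pupil))) |>
  summarize(mean_pupil_z = mean(pupil_z))  # ~0 in every group by construction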
Value
An eyeris object with a new column in the time series data: pupil_raw_{...}_z
Note
This function is part of the glassbox() preprocessing pipeline and is not intended for direct use in most cases. Use glassbox(zscore = TRUE). Advanced users may call it directly if needed.
See Also
glassbox() for the recommended way to run this step as part of the full eyeris glassbox preprocessing pipeline.
Examples
demo_data <- eyelink_asc_demo_dataset()
demo_data |>
eyeris::glassbox(zscore = TRUE) |> # set to FALSE to skip (not recommended)
plot(seed = 0)
Internal function to z-score pupil data
Description
This function z-scores pupil data by subtracting the mean and dividing by the standard deviation. It is called by the exposed wrapper zscore().
Usage
zscore_pupil(x, prev_op)
Arguments
x |
A data frame containing pupil data |
prev_op |
The name of the previous operation in the pipeline |
Value
A vector of z-scored pupil data
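For intuition, the underlying arithmetic is equivalent to base::scale(); a minimal illustrative sketch of that computation (not the internal code itself):
pupil <- c(3000, 3050, NA, 2980, 3020)
z <- (pupil - mean(pupil, na.rm = TRUE)) / sd(pupil, na.rm = TRUE)
z  # mean ~0, sd ~1; NA samples (e.g., blinks) stay NA
round(mean(z, na.rm = TRUE), 10)  # 0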