Merged
2 changes: 2 additions & 0 deletions .gitignore
@@ -23,6 +23,8 @@ var/
.installed.cfg
*.egg

.env

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
95 changes: 48 additions & 47 deletions README.md
@@ -8,45 +8,23 @@ A comprehensive membership evaluations solution for Computer Science House.
Development
-----------

### Config
## Running (containerized)

You must create `config.py` in the top-level directory with the appropriate credentials for the application to run. See `config.env.py` for an example.

#### Add OIDC Config
Reach out to an RTP to get OIDC credentials that will allow you to develop locally behind OIDC auth.
```py
# OIDC Config
OIDC_ISSUER = "https://sso.csh.rit.edu/auth/realms/csh"
OIDC_CLIENT_CONFIG = {
'client_id': '',
'client_secret': '',
'post_logout_redirect_uris': ['http://0.0.0.0:6969/logout']
}
```

#### Add S3 Config
An S3 bucket is used to store files that users upload (currently just for major project submissions). For this to work, you need to provide credentials to the app.
It is likely easier to use containers like `podman` or `docker` with the corresponding compose file.

There are two ways to get the needed credentials:
1. Reach out to an RTP for credentials to the dev bucket.
2. Create your own bucket using [DEaDASS](https://deadass.csh.rit.edu/), which will give you the credentials you need.
With podman, I have been using

```py
S3_URI = env.get("S3_URI", "https://s3.csh.rit.edu")
S3_BUCKET_ID = env.get("S3_BUCKET_ID", "major-project-media")
AWS_ACCESS_KEY_ID = env.get("AWS_ACCESS_KEY_ID", "")
AWS_SECRET_ACCESS_KEY = env.get("AWS_SECRET_ACCESS_KEY", "")
```
```sh
podman compose up --watch
```

#### Database
You can either develop against the dev database or use the local database provided in the docker compose file.

Using the local database is detailed below, but both options require the dev database password, so you will need to ask an RTP for that as well.

#### Forcing evals/rtp or anything else
All of the role checking is done in `conditional/utils/user_dict.py`; you can change the various functions to `return True` for debugging.
If you want, you can run without auto rebuild using
```sh
podman compose up --force-recreate --build
```
and restart it each time changes are made.

### Run (Without Docker)
## Run (Without Docker)

To run the application without using containers, you must have the latest version of [Python 3](https://www.python.org/downloads/) and [virtualenv](https://virtualenv.pypa.io/en/stable/installation/) installed. Once you have those installed, create a new virtualenv and install the Python dependencies:

@@ -90,30 +68,53 @@ or
```sh
python -m gunicorn
```

### Run (containerized)
## Config

It is likely easier to use containers like `podman` or `docker` with the corresponding compose file.

With podman, I have been using
You must create `config.py` in the top-level directory with the appropriate credentials for the application to run. See `config.env.py` for an example.

```sh
podman compose up --watch
```
### Add OIDC Config
Reach out to an RTP to get OIDC credentials that will allow you to develop locally behind OIDC auth.
```py
# OIDC Config
OIDC_ISSUER = "https://sso.csh.rit.edu/auth/realms/csh"
OIDC_CLIENT_CONFIG = {
'client_id': '',
'client_secret': '',
'post_logout_redirect_uris': ['http://0.0.0.0:6969/logout']
}
```

If you want, you can run without compose support using
```sh
podman compose up --force-recreate --build
```
### Add S3 Config
An S3 bucket is used to store files that users upload (currently just for major project submissions). For this to work, you need to provide credentials to the app.

There are two ways to get the needed credentials:
1. Reach out to an RTP for credentials to the dev bucket.
2. Create your own bucket using [DEaDASS](https://deadass.csh.rit.edu/), which will give you the credentials you need.

```py
S3_URI = env.get("S3_URI", "https://s3.csh.rit.edu")
S3_BUCKET_ID = env.get("S3_BUCKET_ID", "major-project-media")
AWS_ACCESS_KEY_ID = env.get("AWS_ACCESS_KEY_ID", "")
AWS_SECRET_ACCESS_KEY = env.get("AWS_SECRET_ACCESS_KEY", "")
```

and restart it each time changes are made.
### Database
You can either develop against the dev database or use the local database provided in the docker compose file.

Using the local database is detailed below, but both options require the dev database password, so you will need to ask an RTP for that as well.

### Forcing evals/rtp or anything else
All of the role checking is done in `conditional/utils/user_dict.py`; you can change the various functions to `return True` for debugging.



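The debugging trick above can be sketched as follows. The function name and the shape of `user_dict` here are assumptions for illustration; the real helpers live in `conditional/utils/user_dict.py` and may differ.

```python
# Hypothetical sketch of a role check in conditional/utils/user_dict.py.
def user_is_rtp(user_dict):
    # Production-style check: inspect the user's group memberships.
    return 'rtp' in user_dict.get('groups', [])

# For local debugging, short-circuit the check so every user passes.
# Revert this before committing.
def user_is_rtp_forced(user_dict):
    return True
```

The same `return True` edit works for any of the other role predicates in that module.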
### Dependencies
## Dependencies

To add new dependencies, add them to `requirements.in` and then run `pip-compile requirements.in` to produce a new locked `requirements.txt`. Do not edit `requirements.txt` directly as it will be overwritten by future PRs.

### Database Stuff
## Database Stuff

#### Local database
### Local database

You can run the database locally using the docker compose file

@@ -130,7 +131,7 @@ To run migration commands in the local database, you can run the commands inside the container:
```sh
podman exec conditional flask db upgrade
```

#### Database Migrations
### Database Migrations

If the database schema is changed after initializing the database, you must migrate it to the new schema by running:

4 changes: 2 additions & 2 deletions conditional/blueprints/conditional.py
@@ -106,8 +106,8 @@ def conditional_review(user_dict=None):

if status == 'Passed':
account = ldap_get_member(uid)
hp = account.housingPoints
ldap_set_housingpoints(account, hp + 2)
hp = int(account.housingPoints)
ldap_set_housingpoints(account, str(hp + 2))

elif cond_obj.i_evaluation:
FreshmanEvalData.query.filter(FreshmanEvalData.id == cond_obj.i_evaluation).update(
4 changes: 2 additions & 2 deletions conditional/blueprints/slideshow.py
@@ -138,8 +138,8 @@ def slideshow_spring_review(user_dict=None):
if ldap_is_intromember(account):
ldap_set_not_intro_member(account)

hp = account.housingPoints
ldap_set_housingpoints(account, hp + 2)
hp = int(account.housingPoints)
ldap_set_housingpoints(account, str(hp + 2))
elif status == "Failed":
if ldap_is_intromember(account):
ldap_set_failed(account)
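The `conditional.py` and `slideshow.py` hunks apply the same fix: the LDAP layer hands back `housingPoints` as a string (an assumption consistent with the casts added here), so the old `hp + 2` was string-plus-int. The new code round-trips through `int` for the arithmetic and back to `str` for the write. A standalone illustration of the pattern, with no LDAP involved:

```python
# LDAP attribute values arrive as strings, so `raw_hp + 2` would raise
# TypeError (can't add str and int). Convert to int for the math, then
# back to str, since the directory write expects a string value.
raw_hp = "4"                   # e.g. what account.housingPoints looks like
new_hp = str(int(raw_hp) + 2)  # "6"
```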
4 changes: 2 additions & 2 deletions conditional/util/ldap.py
@@ -136,13 +136,13 @@ def ldap_set_inactive(account):

def ldap_set_intro_member(account):
_ldap_add_member_to_group(account, 'intromembers')
ldap_get_intro_members().cache_clear()
ldap_get_intro_members.cache_clear()
ldap_get_member.cache_clear()


def ldap_set_not_intro_member(account):
_ldap_remove_member_from_group(account, 'intromembers')
ldap_get_intro_members().cache_clear()
ldap_get_intro_members.cache_clear()
ldap_get_member.cache_clear()


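The `ldap.py` hunks fix a subtle bug. Assuming `ldap_get_intro_members` is wrapped by an `lru_cache`-style decorator (which the `.cache_clear()` calls suggest), `cache_clear` is an attribute of the decorated function object itself, so the old `ldap_get_intro_members().cache_clear()` first executed the query and then tried to call `cache_clear` on its return value, raising `AttributeError`. A standalone sketch of the difference:

```python
from functools import lru_cache

calls = []

@lru_cache(maxsize=None)
def get_intro_members():
    calls.append(1)            # track how many times the body really runs
    return ['alice', 'bob']

get_intro_members()            # executes the body
get_intro_members()            # served from the cache; body not re-run

get_intro_members.cache_clear()    # correct: attribute of the function object
get_intro_members()                # body runs again after the clear

# get_intro_members().cache_clear()   # old, buggy form: AttributeError,
#                                     # because the returned list has no
#                                     # cache_clear attribute
```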
2 changes: 2 additions & 0 deletions docker-compose.yaml
@@ -7,6 +7,8 @@ services:
- conditional-postgres
ports:
- "127.0.0.1:8080:8080"
env_file:
- .env
volumes:
- ./migrations:/opt/conditional/migrations
develop:
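The compose change loads environment variables from a git-ignored `.env` file (the `.gitignore` hunk at the top of this PR adds it). A hypothetical `.env` matching the variables the README's S3 config reads via `env.get` — the values shown are placeholders, not real credentials:

```
# .env — kept out of version control via .gitignore
S3_URI=https://s3.csh.rit.edu
S3_BUCKET_ID=major-project-media
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
```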
10 changes: 0 additions & 10 deletions migrations/versions/e38beaf3e875_update_db.py
@@ -16,15 +16,6 @@

def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_index('member_batch_users_id_idx', table_name='member_batch_users')
op.drop_table('member_batch_users')
op.drop_index('freshman_batch_pulls_id_idx', table_name='freshman_batch_pulls')
op.drop_table('freshman_batch_pulls')
op.drop_index('member_batch_pulls_id_idx', table_name='member_batch_pulls')
op.drop_table('member_batch_pulls')
op.drop_index('freshman_batch_users_id_pkey', table_name='freshman_batch_users')
op.drop_table('freshman_batch_users')
op.drop_table('batch_conditions')
op.alter_column('freshman_accounts', 'onfloor_status',
existing_type=sa.BOOLEAN(),
nullable=True)
@@ -37,7 +28,6 @@ def upgrade():
op.alter_column('member_hm_attendance', 'attendance_status',
existing_type=postgresql.ENUM('Attended', 'Excused', 'Absent', name='attendance_enum'),
nullable=True)
op.drop_table('batch')
# ### end Alembic commands ###

