How to Deploy Django Channels 2.x on AWS Elastic Beanstalk (Amazon Linux 2)
Deploying Django Channels on Elastic Beanstalk is an arduous task. While some resources exist (in particular this Medium post), they are outdated and don't work directly on Amazon Linux 2.
In this article, I'll walk you through the steps I followed to finally get Django Channels up and running on Elastic Beanstalk with Amazon Linux 2. Along the way, I'll also include the entire ebextensions configuration needed for a production-ready Django setup on Elastic Beanstalk.
Elastic Beanstalk Configuration
Step 1: Add the basic setup config: `/.ebextensions/01_setup.config`
```yaml
packages:
  yum:
    amazon-linux-extras: []
  python:
    supervisor: []

commands:
  01_postgres_activate:
    command: sudo amazon-linux-extras enable postgresql10
  02_postgres_install:
    command: sudo yum install -y postgresql-devel
  03_make_supervisor_log_directory:
    command: sudo mkdir -p /var/log/supervisor/
  04_make_conf_directory:
    command: sudo mkdir -p /etc/supervisor/conf.d/
  05_restart_supervisor:
    command: sudo /sbin/service supervisord restart

files:
  "/usr/local/etc/supervisord.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      ; supervisor config file
      [unix_http_server]
      file=/var/run/supervisor.sock   ; (the path to the socket file)
      chmod=0700                      ; socket file mode (default 0700)

      [supervisord]
      logfile=/var/log/supervisor/supervisord.log ; (main log file; default $CWD/supervisord.log)
      pidfile=/var/run/supervisord.pid            ; (supervisord pidfile; default supervisord.pid)
      childlogdir=/var/log/supervisor             ; ('AUTO' child log dir, default $TEMP)

      ; the below section must remain in the config file for RPC
      ; (supervisorctl/web interface) to work, additional interfaces may be
      ; added by defining them in separate rpcinterface: sections
      [rpcinterface:supervisor]
      supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

      [supervisorctl]
      serverurl=unix:///var/run/supervisor.sock ; use a unix:// URL for a unix socket

      ; The [include] section can just contain the "files" setting. This
      ; setting can list multiple files (separated by whitespace or
      ; newlines). It can also contain wildcards. The filenames are
      ; interpreted as relative to this file. Included files *cannot*
      ; include files themselves.
      [include]
      files = /etc/supervisor/conf.d/*.conf

      ; Change according to your configurations
      [inet_http_server]
      port = 127.0.0.1:9001

  "/etc/init.d/supervisord":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash

      # Source function library
      . /etc/rc.d/init.d/functions

      # Source system settings
      if [ -f /etc/sysconfig/supervisord ]; then
          . /etc/sysconfig/supervisord
      fi

      # Path to the supervisorctl script, server binary,
      # and short-form for messages.
      supervisorctl=/usr/bin/supervisorctl
      supervisord=${SUPERVISORD-/usr/bin/supervisord}
      prog=supervisord
      pidfile=${PIDFILE-/var/run/supervisord.pid}
      lockfile=${LOCKFILE-/var/lock/subsys/supervisord}
      STOP_TIMEOUT=${STOP_TIMEOUT-60}
      OPTIONS="${OPTIONS--c /usr/local/etc/supervisord.conf}"
      RETVAL=0

      start() {
          echo -n $"Starting $prog: "
          daemon --pidfile=${pidfile} $supervisord $OPTIONS
          RETVAL=$?
          echo
          if [ $RETVAL -eq 0 ]; then
              touch ${lockfile}
              $supervisorctl $OPTIONS status
          fi
          return $RETVAL
      }

      stop() {
          echo -n $"Stopping $prog: "
          killproc -p ${pidfile} -d ${STOP_TIMEOUT} $supervisord
          RETVAL=$?
          echo
          [ $RETVAL -eq 0 ] && rm -rf ${lockfile} ${pidfile}
      }

      reload() {
          echo -n $"Reloading $prog: "
          LSB=1 killproc -p $pidfile $supervisord -HUP
          RETVAL=$?
          echo
          if [ $RETVAL -eq 7 ]; then
              failure $"$prog reload"
          else
              $supervisorctl $OPTIONS status
          fi
      }

      restart() {
          stop
          start
      }

      case "$1" in
        start)
          start
          ;;
        stop)
          stop
          ;;
        status)
          status -p ${pidfile} $supervisord
          RETVAL=$?
          [ $RETVAL -eq 0 ] && $supervisorctl $OPTIONS status
          ;;
        restart)
          restart
          ;;
        condrestart|try-restart)
          if status -p ${pidfile} $supervisord >&/dev/null; then
            stop
            start
          fi
          ;;
        force-reload|reload)
          reload
          ;;
        *)
          echo $"Usage: $prog {start|stop|restart|condrestart|try-restart|force-reload|reload}"
          RETVAL=2
      esac

      exit $RETVAL
```
This configuration will do the following:

- Install `amazon-linux-extras`, which we then use to install PostgreSQL:

  ```yaml
  packages:
    yum:
      amazon-linux-extras: []
  ```

- Enable and install PostgreSQL via `amazon-linux-extras`:

  ```yaml
  commands:
    01_postgres_activate:
      command: sudo amazon-linux-extras enable postgresql10
    02_postgres_install:
      command: sudo yum install -y postgresql-devel
  ```

- Install supervisor (via easy_install):

  ```yaml
  packages:
    ...
    python:
      supervisor: []
  ```

- Create the necessary supervisor directories and restart supervisord:

  ```yaml
  commands:
    ...
    03_make_supervisor_log_directory:
      command: sudo mkdir -p /var/log/supervisor/
    04_make_conf_directory:
      command: sudo mkdir -p /etc/supervisor/conf.d/
    05_restart_supervisor:
      command: sudo /sbin/service supervisord restart
  ```

- Add the supervisor config files:

  ```yaml
  files:
    "/usr/local/etc/supervisord.conf": ...
    "/etc/init.d/supervisord": ...
  ```
Step 2: Add the Django configuration: `/.ebextensions/02_python.config`

```yaml
option_settings:
  "aws:elasticbeanstalk:application:environment":
    DJANGO_SETTINGS_MODULE: "<Django Root App>.settings"
    "PYTHONPATH": "/var/app/current:$PYTHONPATH"
  "aws:elasticbeanstalk:container:python":
    WSGIPath: <Django Root App>.wsgi:application
    NumProcesses: 3
    NumThreads: 20
  "aws:elasticbeanstalk:environment:proxy:staticfiles":
    "/static": "static"

container_commands:
  01_migrate:
    command: "source /var/app/venv/*/bin/activate && python manage.py migrate --noinput"
    leader_only: true
  02_collectstatic:
    command: "source /var/app/venv/*/bin/activate && python manage.py collectstatic --noinput"
  03_wsgipass:
    command: 'echo "WSGIPassAuthorization On" >> ../wsgi.conf'
```
This configuration will do the following:

- Tell Elastic Beanstalk where to find the Django settings file
- Tell Elastic Beanstalk which Python path to use
- Tell Elastic Beanstalk where to find the WSGI file (for gunicorn)
- Tell Elastic Beanstalk the static files configuration

  ```yaml
  option_settings:
    "aws:elasticbeanstalk:application:environment":
      DJANGO_SETTINGS_MODULE: "<Django Root App>.settings"
      "PYTHONPATH": "/var/app/current:$PYTHONPATH"
    "aws:elasticbeanstalk:container:python":
      WSGIPath: <Django Root App>.wsgi:application
      NumProcesses: 3
      NumThreads: 20
    "aws:elasticbeanstalk:environment:proxy:staticfiles":
      "/static": "static"
  ```

- Next, the three container commands run migrations, run collectstatic, and finally set "WSGIPassAuthorization On" so that authentication headers are passed through to Django for our websockets:

  ```yaml
  container_commands:
    01_migrate:
      command: "source /var/app/venv/*/bin/activate && python manage.py migrate --noinput"
      leader_only: true
    02_collectstatic:
      command: "source /var/app/venv/*/bin/activate && python manage.py collectstatic --noinput"
    03_wsgipass:
      command: 'echo "WSGIPassAuthorization On" >> ../wsgi.conf'
  ```

Note: Replace `<Django Root App>` with your root app.
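An aside on the `03_wsgipass` command: without `WSGIPassAuthorization On`, mod_wsgi drops the `Authorization` header before it reaches Django, so token-based authentication silently fails. A minimal stdlib-only sketch (the helper name is mine, not part of the deployment) of the kind of lookup that depends on it:

```python
from typing import Optional

def get_bearer_token(meta: dict) -> Optional[str]:
    """Pull a bearer token out of a WSGI-style META mapping (hypothetical helper)."""
    auth = meta.get("HTTP_AUTHORIZATION", "")
    parts = auth.split(" ", 1)
    if len(parts) == 2 and parts[0].lower() == "bearer":
        return parts[1]
    return None

print(get_bearer_token({"HTTP_AUTHORIZATION": "Bearer abc123"}))  # -> abc123
print(get_bearer_token({}))  # -> None
```

When the directive is off, `HTTP_AUTHORIZATION` is simply absent from `request.META` and lookups like this return `None` even for authenticated requests.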
Step 3: Add configuration for the load balancer: `/.ebextensions/03_https.config`

```yaml
option_settings:
  aws:elbv2:listener:443:
    ListenerEnabled: 'true'
    SSLCertificateArns: <SSL ARN>
    Protocol: HTTPS
    Rules: ws
  aws:elbv2:listenerrule:ws:
    PathPatterns: /ws/*
    Process: websocket
    Priority: 1
  aws:elasticbeanstalk:environment:process:websocket:
    Port: '5000'
    Protocol: HTTP

Resources:
  AWSEBV2LoadBalancerListener:
    Type: 'AWS::ElasticLoadBalancingV2::Listener'
    Properties:
      LoadBalancerArn: { "Ref" : "AWSEBV2LoadBalancer" }
      DefaultActions:
        - RedirectConfig:
            Port: 443
            Protocol: HTTPS
            StatusCode: HTTP_301
          Type: redirect
      Port: 80
      Protocol: HTTP
```
Note: Replace `<SSL ARN>` with your SSL certificate ARN.
This configuration will do the following:

- Enable the load balancer to listen on port 443 and add a rule for websockets (the rule name doesn't have to be `ws`):

  ```yaml
  option_settings:
    aws:elbv2:listener:443:
      ListenerEnabled: 'true'
      SSLCertificateArns: <SSL ARN>
      Protocol: HTTPS
      Rules: ws
  ```

- Define the rule for websockets (`ws`):

  ```yaml
  aws:elbv2:listenerrule:ws:
    PathPatterns: /ws/*
    Process: websocket
    Priority: 1
  ```

- Define the `websocket` process:

  ```yaml
  aws:elasticbeanstalk:environment:process:websocket:
    Port: '5000'
    Protocol: HTTP
  ```

- Link the load balancer resource and redirect HTTP (port 80) to HTTPS:

  ```yaml
  Resources:
    AWSEBV2LoadBalancerListener:
      Type: 'AWS::ElasticLoadBalancingV2::Listener'
      Properties:
        LoadBalancerArn: { "Ref" : "AWSEBV2LoadBalancer" }
        DefaultActions:
          - RedirectConfig:
              Port: 443
              Protocol: HTTPS
              StatusCode: HTTP_301
            Type: redirect
        Port: 80
        Protocol: HTTP
  ```
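To build intuition for the `PathPatterns: /ws/*` rule: ALB path patterns treat `*` as "any characters" and `?` as "exactly one character", which is close enough to shell-style globbing for a quick sketch (the helper below is mine, purely illustrative, not part of the deployment):

```python
from fnmatch import fnmatchcase

# Rough approximation of ALB path-pattern matching for the 'ws' rule.
WS_PATTERN = "/ws/*"

def routed_to_websocket_process(path: str) -> bool:
    """True if the ALB 'ws' rule would send this path to the websocket process."""
    return fnmatchcase(path, WS_PATTERN)

print(routed_to_websocket_process("/ws/chat/room1/"))  # websocket process (daphne)
print(routed_to_websocket_process("/api/users/"))      # default process (gunicorn)
```

So any request whose path starts with `/ws/` goes to daphne on port 5000, while everything else stays with the default process.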
Step 4: Add the ElastiCache config: `/.ebextensions/04_elasticache.config`
```yaml
Resources:
  MyCacheSecurityGroup:
    Type: "AWS::EC2::SecurityGroup"
    Properties:
      GroupDescription: "Lock cache down to webserver access only"
      SecurityGroupIngress:
        - IpProtocol: "tcp"
          FromPort:
            Fn::GetOptionSetting:
              OptionName: "CachePort"
              DefaultValue: "6379"
          ToPort:
            Fn::GetOptionSetting:
              OptionName: "CachePort"
              DefaultValue: "6379"
          SourceSecurityGroupName:
            Ref: "AWSEBSecurityGroup"
  MyElastiCache:
    Type: "AWS::ElastiCache::CacheCluster"
    Properties:
      CacheNodeType:
        Fn::GetOptionSetting:
          OptionName: "CacheNodeType"
          DefaultValue: "cache.m3.large"
      NumCacheNodes:
        Fn::GetOptionSetting:
          OptionName: "NumCacheNodes"
          DefaultValue: "1"
      Engine:
        Fn::GetOptionSetting:
          OptionName: "Engine"
          DefaultValue: "redis"
      VpcSecurityGroupIds:
        - Fn::GetAtt:
            - MyCacheSecurityGroup
            - GroupId

Outputs:
  ElastiCache:
    Description: "ID of ElastiCache Cache Cluster with Redis Engine"
    Value:
      Ref: "MyElastiCache"
```
The above configuration will set up ElastiCache (Redis), which we can then use for websockets.
Now it's time to set up the supervisor daemon script that will run daphne for us. Amazon Linux 2 provides various platform hooks; we will use the postdeploy hook.
Step 5: Create a copy of the environment variables. Create a new file: `/.platform/hooks/postdeploy/01_set_env.sh`
```bash
#!/bin/bash

# Create a copy of the environment variable file.
cp /opt/elasticbeanstalk/deployment/env /opt/elasticbeanstalk/deployment/custom_env_var

# Set permissions on the custom_env_var file so it can be accessed by any user
# on the instance. You can restrict permissions as per your requirements.
chmod 644 /opt/elasticbeanstalk/deployment/custom_env_var

# Remove duplicate files upon deployment.
rm -f /opt/elasticbeanstalk/deployment/*.bak
```
For my use case, I was using python-dotenv, which meant more work to get those environment variables:

```bash
export $(sudo cat /opt/elasticbeanstalk/deployment/env | xargs)

if [ "$PROJECT_ENV" = 'staging' ]
then
    ENV_PATH=/var/app/current/.env/staging.env
else
    ENV_PATH=/var/app/current/.env/prod.env
fi

cat $ENV_PATH > /opt/elasticbeanstalk/deployment/django_env_var
chmod 644 /opt/elasticbeanstalk/deployment/django_env_var
```
You can skip adding this to the `01_set_env.sh` file if you don't use python-dotenv.
Step 6: Add the supervisor daemon file: `/.platform/hooks/postdeploy/02_run_supervisor_daemon.sh`
```bash
#!/bin/bash

# Get system environment variables
systemenv=`cat /opt/elasticbeanstalk/deployment/custom_env_var | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/:$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
systemenv=${systemenv%?}
systemenv=`echo $systemenv | sed 's/,/",/g' | sed 's/=/="/g'`
systemenv="$systemenv\""

# Get Django environment variables; comment out if not using python-dotenv
djangoenv=`cat /opt/elasticbeanstalk/deployment/django_env_var | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g'`
allenv="$systemenv,$djangoenv"

# Create daemon configuration script
daemonconf="[program:daphne]
command=daphne -b :: -p 5000 <Django Root App>.asgi:application
directory=/var/app
user=ec2-user
numprocs=1
stdout_logfile=/var/log/stdout_daphne.log
stderr_logfile=/var/log/stderr_daphne.log
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

environment=$allenv
"

# Create the Supervisor conf script
echo "$daemonconf" | sudo tee /etc/supervisor/conf.d/daemon.conf

# Reread the Supervisor config
supervisorctl reread

# Update Supervisor in cache without restarting all services
supervisorctl update

# Start/restart processes through Supervisor
supervisorctl restart daphne
```
Note: If you're not using python-dotenv, change `environment=$allenv` to `environment=$systemenv`.
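The `tr`/`sed` one-liners above are dense. For intuition, here is roughly what they produce, sketched in plain Python (the helper is hypothetical and skips the `$PATH`/`$PYTHONPATH` special-casing the shell version does): supervisor's `environment=` setting wants `KEY1="v1",KEY2="v2"`, with literal `%` doubled because supervisor expands `%(...)s` expressions.

```python
def to_supervisor_environment(env_text: str) -> str:
    """Convert KEY=value lines into a supervisor environment= string (hypothetical helper)."""
    pairs = []
    for line in env_text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("export "):
            line = line[len("export "):]
        key, _, value = line.partition("=")
        # supervisor expands %(...)s, so literal '%' must be doubled
        value = value.replace("%", "%%")
        pairs.append('%s="%s"' % (key, value))
    return ",".join(pairs)

print(to_supervisor_environment("DB_NAME=mydb\nexport DEBUG=0\nRATE=5%"))
# -> DB_NAME="mydb",DEBUG="0",RATE="5%%"
```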
Django Configuration
With this, all configuration changes are finally complete! Now we can move on to the Django-related configuration.

Install `channels_redis`:

```shell
pip install channels_redis
```

Add the following to your Django settings file:

```python
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            "hosts": [("<REDIS ELASTICACHE HOSTNAME>", 6379)],
        },
    },
}
```
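Rather than hardcoding the hostname, you may prefer to pull it from the environment. A minimal sketch, assuming hypothetical `REDIS_HOST`/`REDIS_PORT` variables exported in the Elastic Beanstalk environment:

```python
import os

# Hypothetical variant: read the ElastiCache endpoint from environment
# variables (e.g. ones set in the Elastic Beanstalk environment) instead
# of hardcoding the hostname in settings.py.
REDIS_HOST = os.environ.get("REDIS_HOST", "localhost")
REDIS_PORT = int(os.environ.get("REDIS_PORT", "6379"))

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [(REDIS_HOST, REDIS_PORT)],
        },
    },
}
```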
Troubleshooting
Open a Django shell on your EC2 instance and test that your channel layer is working:

```shell
$ python manage.py shell
>>> import channels.layers
>>> from asgiref.sync import async_to_sync
>>> channel_layer = channels.layers.get_channel_layer()
>>> async_to_sync(channel_layer.send)('test_channel', {'foo': 'bar'})
>>> async_to_sync(channel_layer.receive)('test_channel')
{'foo': 'bar'}
```
Now that all the changes are complete, we can deploy them. If you haven't created the Elastic Beanstalk environment yet, create it first:

```shell
eb create -v
```

Finally, deploy:

```shell
eb deploy -v
```
I hope you find this blog post useful. I spent a lot of time setting up Django Channels, so I thought it would be a good idea to share what I learned with the rest of the community.
I will continue to edit this post to make it better. If you have any questions or corrections, please let me know!