Minterest: Code Development Policy
In this article, we will share how the code for the Minterest protocol is created, reviewed, and deployed, along with the toolkits and best practices used.
Code Creation
Step 1. Planning
When developing a new feature, our product and development teams work together to get a high-level feature design ready. Then, developers structure detailed requirements for the feature and place it within the broader scope of the development roadmap.
Minterest development follows agile methodologies and is planned into “Sprints”: two-week periods focused on delivering a set amount of work for incremental functionality improvements. Sprint planning involves setting priorities, estimating timeframes, and distributing assignments among the team members with the most relevant skills and domain expertise. Then, we execute, test, and review the results.
Step 2. Coding Guidelines
Developers follow quality guidelines grounded in the best practices of Solidity code development, leveraging OpenZeppelin’s knowledge base, a resource used by development teams around the world.
Step 3. Coding in Style
Consistency in code style is vital to ensure readability. We check that our code follows an internal Code Style guideline based on the official Solidity Style Guide. This includes details from naming conventions to code layout.
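To illustrate these conventions, the hypothetical fragment below follows the official Solidity Style Guide: CapWords for contract and event names, mixedCase for functions and variables, and UPPER_CASE_WITH_UNDERSCORES for constants. It is an illustrative sketch, not Minterest code.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Contract names use CapWords.
contract InterestRateModel {
    // Constants use UPPER_CASE_WITH_UNDERSCORES.
    uint256 public constant MAX_RATE = 1e18;

    // Event names use CapWords.
    event RateUpdated(uint256 newRate);

    // Variable and function names use mixedCase.
    uint256 public currentRate;

    function updateRate(uint256 newRate) external {
        require(newRate <= MAX_RATE, "rate too high");
        currentRate = newRate;
        emit RateUpdated(newRate);
    }
}
```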
Step 4. Analysis Tools
To catch potential issues early, we utilise several industry-leading static analysis tools and benchmarks.
- Slither: A powerful static analysis tool that scans Solidity code for vulnerabilities and errors, acting as the primary quality gate in the review process.
- Solhint: A linter used to standardise and format the code, ensuring alignment with the style guide.
- Hardhat: A versatile toolkit that helps maintain unit test coverage at a 98% level, validated by Codecov. Hardhat also supports various testing and deployment workflows, making it an integral part of our development process.
- Mocha: A testing framework that helps create benchmark tests.
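As a sketch of how such tooling is wired in, a minimal `.solhint.json` might look like the following; the specific rule set shown is an assumption for illustration, not Minterest’s actual configuration.

```json
{
  "extends": "solhint:recommended",
  "rules": {
    "compiler-version": ["error", "^0.8.0"],
    "func-visibility": ["warn", { "ignoreConstructors": true }],
    "max-line-length": ["warn", 120]
  }
}
```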
Step 5. Simulations
To test how the code would function in a real-world setting, simulations are designed to walk through various user behaviour scenarios.
In a typical scenario, the team defines all potential user groups, including lenders, borrowers, stakers, vesting owners, and combinations thereof.
Then, a live protocol simulation test is conducted, where those user groups perform multiple operations like lending, borrowing, staking, and withdrawing.
The outcome is then compared to the expected results to answer one simple question: in real life, could a user experience any challenges or negatively impact the protocol through their actions? If so, the team decides how to modify the code, and the revision loop repeats until we are satisfied with the output.
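The idea behind such a simulation can be sketched as follows. This is a deliberately simplified pool model written in TypeScript; the class names, the single collateral factor, and the invariant checks are all illustrative assumptions, not Minterest’s actual contracts or parameters.

```typescript
// Minimal sketch of a lending-protocol simulation.
type User = { supplied: number; borrowed: number };

class PoolSim {
  users = new Map<string, User>();
  collateralFactor = 0.8; // assumed parameter, not Minterest's

  private get(id: string): User {
    let u = this.users.get(id);
    if (!u) { u = { supplied: 0, borrowed: 0 }; this.users.set(id, u); }
    return u;
  }

  lend(id: string, amount: number): void {
    this.get(id).supplied += amount;
  }

  borrow(id: string, amount: number): boolean {
    const u = this.get(id);
    // Reject borrows that would exceed the user's collateral limit.
    if (u.borrowed + amount > u.supplied * this.collateralFactor) return false;
    // Reject borrows that exceed the pool's available liquidity.
    if (amount > this.liquidity()) return false;
    u.borrowed += amount;
    return true;
  }

  withdraw(id: string, amount: number): boolean {
    const u = this.get(id);
    // A withdrawal must leave the remaining supply covering the debt.
    if ((u.supplied - amount) * this.collateralFactor < u.borrowed) return false;
    if (amount > this.liquidity()) return false;
    u.supplied -= amount;
    return true;
  }

  liquidity(): number {
    let total = 0;
    for (const u of this.users.values()) total += u.supplied - u.borrowed;
    return total;
  }
}

// Walk a simple scenario: a lender supplies, a borrower posts
// collateral and borrows, then tries to over-withdraw.
const sim = new PoolSim();
sim.lend("lender", 1000);
sim.lend("borrower", 500);
console.log(sim.borrow("borrower", 300));   // true: within the 500 * 0.8 limit
console.log(sim.withdraw("borrower", 400)); // false: would undercollateralise the loan
```

A real simulation would, of course, run many such user groups and operation sequences against the deployed contracts rather than a toy model, then compare the observed state to the expected one.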
Step 6. Integration Testing
The final step before peer review involves integration tests. These tests ensure that changes at the blockchain level do not disrupt the web infrastructure or other critical components of the system.
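The shape of such a test can be sketched with stubs standing in for the real chain and web infrastructure; every name below is an illustrative assumption, not Minterest’s actual API.

```typescript
// Stub of on-chain state, standing in for deployed contracts.
class ChainStub {
  private balances = new Map<string, number>();
  deposit(addr: string, amount: number): void {
    this.balances.set(addr, (this.balances.get(addr) ?? 0) + amount);
  }
  balanceOf(addr: string): number {
    return this.balances.get(addr) ?? 0;
  }
}

// Stub of the web backend, which indexes chain state for the UI.
class WebApiStub {
  constructor(private chain: ChainStub) {}
  userBalance(addr: string): { address: string; balance: number } {
    return { address: addr, balance: this.chain.balanceOf(addr) };
  }
}

// Integration check: after a chain-level operation, the web layer
// must report a view consistent with on-chain state.
const chain = new ChainStub();
const api = new WebApiStub(chain);
chain.deposit("0xabc", 250);
const view = api.userBalance("0xabc");
if (view.balance !== chain.balanceOf("0xabc")) {
  throw new Error("web layer out of sync with chain state");
}
console.log("integration check passed:", view);
```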
Code Review
Step 1. Peer Review
Once the code creator is satisfied with the initial result, a peer who has familiarity with the architecture and codebase of Minterest is selected to review it. This approach allows for fresh eyes to evaluate the work and catch potential issues.
The selected reviewer carefully reads the code, tests it, and confirms that the output serves the end goal.
Step 2. Verification Against Known Issues
The peer reviewer runs the fresh code through a checklist of known issues. We track different levels of vulnerabilities and so-called “smelly code” — bad practices that should be avoided. The team uses a comprehensive Security checklist compiled from credible sources like Beirão, to guide this process.
Independent Peer Review
We are now adding independent peer reviewers as part of the development pipeline to enhance code quality and security. This involves engaging with external experts to provide an unbiased evaluation of the underlying codebase.
Step 1: Engaging External Reviewers
We select reviewers with expertise in blockchain and smart contract security from trusted partners and the broader community. They bring fresh perspectives to identify potential issues.
Step 2: Review Process
These external reviewers thoroughly analyse the code and documentation, focusing on security vulnerabilities and best practices, using the same tools and standards as our internal team.
Step 3: Feedback and Iteration
The feedback is reviewed with our team, necessary revisions are made, and the code is retested until both parties are confident in its security and reliability.
This process adds an extra layer of scrutiny, reinforcing our commitment to delivering secure and reliable smart contracts.
Third-Party Audit
Security Audit Policy
The final line of defence is the third-party audit. Audits are specifically scheduled for code that directly interacts with user assets. This includes code changes that impact money markets, MINTY distribution, or processes related to assets, user roles, or contract ownership.
Other changes, such as previously audited bridge integrations or internal changes to liquidation engine contracts, are accumulated for periodic full-scale audits.
Step 1. Audit Prep: Tag and Organise
Both the code creator and the peer reviewer tag the code that requires third-party auditing. This organisation helps streamline the audit process and ensures that no critical changes are overlooked.
Step 2. Audit Scope
We document the changes since the previous audit and the intended functionality in fine detail, catering to the auditing firm’s needs.
Then, we prioritise revision areas based on the deployment schedule and coordinate with the audit provider to establish a timeline for deliverables.
Step 3. Audit Kickoff
The batched code and documentation are sent to the third-party audit team. Throughout the audit, our team provides any additional context or explanations needed to help the auditors fully understand the changes.
Step 4. Audit Process
When the audit team is ready to provide results, we review the audit report together with the audit team to make sure we understand all findings.
We then prioritise and implement necessary fixes, re-test the code to ensure no new issues arise, and resubmit the update to the audit team for another round of reviews. This process iterates until the teams agree to sign off.
Finally, the auditor provides a report with the finalised review, findings and recommendations.
Step 5. Go Live
Once signed off, the code is deployed to the main network. Post-deployment, we monitor the code to ensure stability, performance, and security.
Conclusion
This document is provided to show our commitment to transparency and continuous improvement by sharing Minterest’s development process. If you are a domain expert with feedback, please share it with our team on any of our social channels.
And, thank you to the community for your continued support of Minterest.
2 August 2024